Test Report: Docker_Linux_containerd_arm64 22186

5e28b85a1d78221970a3d6d4a20cdd5c3710ee83:2025-12-18:42830

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 497.98
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 367.85
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 2.34
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 2.26
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 2.4
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 736.28
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 2.3
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 1.92
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 3.2
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 2.47
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 242
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 3.11
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.09
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.31
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.34
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.41
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.35
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.49
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 0.15
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 92.62
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 2.59
358 TestKubernetesUpgrade 802.06
404 TestStartStop/group/no-preload/serial/FirstStart 511.29
437 TestStartStop/group/newest-cni/serial/FirstStart 501.8
438 TestStartStop/group/no-preload/serial/DeployApp 2.93
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 85.85
442 TestStartStop/group/no-preload/serial/SecondStart 370.08
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 80.66
447 TestStartStop/group/newest-cni/serial/SecondStart 375.41
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.24
452 TestStartStop/group/newest-cni/serial/Pause 9.62
466 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 287.94
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (497.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1218 00:21:09.080983 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:23:25.214406 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:23:52.929883 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.397238 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.403636 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.415115 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.436649 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.478107 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.559646 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.721203 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:05.043000 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:05.685160 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:06.966580 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:09.529481 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:14.650926 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:24.892588 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:45.374812 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:26:26.336206 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:27:48.260338 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:28:25.214532 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m16.50082275s)

-- stdout --
	* [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	* Pulling base image v0.0.48-1765966054-22186 ...
	* Found network options:
	  - HTTP_PROXY=localhost:42501
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:42501 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000229243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195767s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195767s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
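Triage note: the failure above pairs two actionable hints from minikube's own output: the proxy warning (NO_PROXY does not include the minikube IP 192.168.49.2) and the closing suggestion to pass --extra-config=kubelet.cgroup-driver=systemd (related issue #4172). A hypothetical retry that wires both hints into the original invocation from functional_test.go:2239 (flag and IP taken verbatim from the output above; not executed as part of this run):

	export NO_PROXY="$NO_PROXY,192.168.49.2"       # include the minikube IP, per the proxy warning
	out/minikube-linux-arm64 start -p functional-232602 --memory=4096 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-rc.1 \
	  --extra-config=kubelet.cgroup-driver=systemd # suggestion printed by minikube above

If cgroup v1 must remain in use on this host, the repeated SystemVerification warning additionally states that kubelet v1.35+ requires the configuration option 'FailCgroupV1' to be set to 'false' (the [kubelet-start] lines above show kubeadm writing the kubelet config to /var/lib/kubelet/config.yaml); that change is not reflected in the sketch.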
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 6 (326.24102ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 00:29:04.044967 1305188 status.go:458] kubeconfig endpoint: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
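Triage note: the status output couples the stale kubectl context warning with the kubeconfig endpoint error from status.go:458 ("functional-232602" missing from the kubeconfig). The fix named by the warning itself, scoped to this profile (hypothetical invocation, not run as part of this job; per the command's help text it refreshes the kubeconfig entry after an IP or port change):

	out/minikube-linux-arm64 update-context -p functional-232602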
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-739047 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdspecific-port1226370769/001:/mount-9p --alsologtostderr -v=1 --port 46464                     │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh -- ls -la /mount-9p                                                                                                             │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh sudo umount -f /mount-9p                                                                                                        │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount3 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount2 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount1                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount1 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount1                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh findmnt -T /mount2                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh findmnt -T /mount3                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ mount          │ -p functional-739047 --kill=true                                                                                                                      │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format short --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format yaml --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh pgrep buildkitd                                                                                                                 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ image          │ functional-739047 image ls --format json --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format table --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr                                                │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls                                                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ delete         │ -p functional-739047                                                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ start          │ -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:20:47
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:20:47.258293 1299733 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:20:47.258419 1299733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:20:47.258423 1299733 out.go:374] Setting ErrFile to fd 2...
	I1218 00:20:47.258427 1299733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:20:47.258666 1299733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:20:47.259075 1299733 out.go:368] Setting JSON to false
	I1218 00:20:47.259901 1299733 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25394,"bootTime":1765991854,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:20:47.259958 1299733 start.go:143] virtualization:  
	I1218 00:20:47.264515 1299733 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:20:47.269243 1299733 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:20:47.269373 1299733 notify.go:221] Checking for updates...
	I1218 00:20:47.275932 1299733 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:20:47.279143 1299733 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:20:47.282303 1299733 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:20:47.285454 1299733 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:20:47.288599 1299733 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:20:47.291902 1299733 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:20:47.316802 1299733 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:20:47.316909 1299733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:20:47.378706 1299733 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-18 00:20:47.369615373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:20:47.378798 1299733 docker.go:319] overlay module found
	I1218 00:20:47.382049 1299733 out.go:179] * Using the docker driver based on user configuration
	I1218 00:20:47.385057 1299733 start.go:309] selected driver: docker
	I1218 00:20:47.385065 1299733 start.go:927] validating driver "docker" against <nil>
	I1218 00:20:47.385076 1299733 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:20:47.385829 1299733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:20:47.441404 1299733 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-18 00:20:47.431787837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:20:47.441557 1299733 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1218 00:20:47.441775 1299733 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 00:20:47.444924 1299733 out.go:179] * Using Docker driver with root privileges
	I1218 00:20:47.447880 1299733 cni.go:84] Creating CNI manager for ""
	I1218 00:20:47.447934 1299733 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:20:47.447941 1299733 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 00:20:47.448009 1299733 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:20:47.452982 1299733 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:20:47.455922 1299733 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:20:47.458854 1299733 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:20:47.461757 1299733 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:20:47.461797 1299733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:20:47.461817 1299733 cache.go:65] Caching tarball of preloaded images
	I1218 00:20:47.461848 1299733 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:20:47.461906 1299733 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:20:47.461915 1299733 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:20:47.462244 1299733 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:20:47.462262 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json: {Name:mk0e5327bdfc651586437cd1e3d43df2deb645ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:20:47.482082 1299733 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:20:47.482093 1299733 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:20:47.482112 1299733 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:20:47.482141 1299733 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:20:47.482276 1299733 start.go:364] duration metric: took 120.424µs to acquireMachinesLock for "functional-232602"
	I1218 00:20:47.482301 1299733 start.go:93] Provisioning new machine with config: &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 00:20:47.482363 1299733 start.go:125] createHost starting for "" (driver="docker")
	I1218 00:20:47.485796 1299733 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1218 00:20:47.486093 1299733 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:42501 to docker env.
	I1218 00:20:47.486117 1299733 start.go:159] libmachine.API.Create for "functional-232602" (driver="docker")
	I1218 00:20:47.486139 1299733 client.go:173] LocalClient.Create starting
	I1218 00:20:47.486211 1299733 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 00:20:47.486244 1299733 main.go:143] libmachine: Decoding PEM data...
	I1218 00:20:47.486261 1299733 main.go:143] libmachine: Parsing certificate...
	I1218 00:20:47.486320 1299733 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 00:20:47.486337 1299733 main.go:143] libmachine: Decoding PEM data...
	I1218 00:20:47.486347 1299733 main.go:143] libmachine: Parsing certificate...
	I1218 00:20:47.486702 1299733 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 00:20:47.503453 1299733 cli_runner.go:211] docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 00:20:47.503546 1299733 network_create.go:284] running [docker network inspect functional-232602] to gather additional debugging logs...
	I1218 00:20:47.503570 1299733 cli_runner.go:164] Run: docker network inspect functional-232602
	W1218 00:20:47.520235 1299733 cli_runner.go:211] docker network inspect functional-232602 returned with exit code 1
	I1218 00:20:47.520255 1299733 network_create.go:287] error running [docker network inspect functional-232602]: docker network inspect functional-232602: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-232602 not found
	I1218 00:20:47.520267 1299733 network_create.go:289] output of [docker network inspect functional-232602]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-232602 not found
	
	** /stderr **
	I1218 00:20:47.520355 1299733 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:20:47.537086 1299733 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018fbb90}
	I1218 00:20:47.537127 1299733 network_create.go:124] attempt to create docker network functional-232602 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1218 00:20:47.537184 1299733 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-232602 functional-232602
	I1218 00:20:47.595672 1299733 network_create.go:108] docker network functional-232602 192.168.49.0/24 created
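The network-create step above is an ordinary Docker command and can be reproduced by hand; a minimal sketch using the name, subnet and MTU from the log (minikube's extra -o and --label options elided):

    # Recreate the bridge network minikube provisions (values taken from the log above)
    docker network create --driver=bridge --subnet=192.168.49.0/24 \
      --gateway=192.168.49.1 -o com.docker.network.driver.mtu=1500 functional-232602
    # Confirm the subnet and gateway were applied
    docker network inspect functional-232602 --format '{{json .IPAM.Config}}'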
	I1218 00:20:47.595700 1299733 kic.go:121] calculated static IP "192.168.49.2" for the "functional-232602" container
	I1218 00:20:47.595777 1299733 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 00:20:47.612034 1299733 cli_runner.go:164] Run: docker volume create functional-232602 --label name.minikube.sigs.k8s.io=functional-232602 --label created_by.minikube.sigs.k8s.io=true
	I1218 00:20:47.630758 1299733 oci.go:103] Successfully created a docker volume functional-232602
	I1218 00:20:47.630835 1299733 cli_runner.go:164] Run: docker run --rm --name functional-232602-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-232602 --entrypoint /usr/bin/test -v functional-232602:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 00:20:48.197202 1299733 oci.go:107] Successfully prepared a docker volume functional-232602
	I1218 00:20:48.197264 1299733 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:20:48.197272 1299733 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 00:20:48.197358 1299733 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-232602:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 00:20:52.120424 1299733 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-232602:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.923032655s)
	I1218 00:20:52.120446 1299733 kic.go:203] duration metric: took 3.923171385s to extract preloaded images to volume ...
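The preload is unpacked into the named Docker volume by a throwaway container running tar, exactly as the Run: line above shows; an equivalent standalone invocation, with the tarball path and base image (digest elided) pulled out into illustrative variables:

    PRELOAD=/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
    KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186
    # Extract the lz4-compressed preload into the functional-232602 volume
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" -v functional-232602:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir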
	W1218 00:20:52.120588 1299733 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 00:20:52.120721 1299733 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 00:20:52.178767 1299733 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-232602 --name functional-232602 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-232602 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-232602 --network functional-232602 --ip 192.168.49.2 --volume functional-232602:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 00:20:52.465063 1299733 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Running}}
	I1218 00:20:52.489888 1299733 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:20:52.508863 1299733 cli_runner.go:164] Run: docker exec functional-232602 stat /var/lib/dpkg/alternatives/iptables
	I1218 00:20:52.560963 1299733 oci.go:144] the created container "functional-232602" has a running status.
	I1218 00:20:52.560982 1299733 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa...
	I1218 00:20:53.371070 1299733 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 00:20:53.397615 1299733 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:20:53.417874 1299733 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 00:20:53.417885 1299733 kic_runner.go:114] Args: [docker exec --privileged functional-232602 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 00:20:53.464219 1299733 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:20:53.489619 1299733 machine.go:94] provisionDockerMachine start ...
	I1218 00:20:53.489711 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:20:53.509494 1299733 main.go:143] libmachine: Using SSH client type: native
	I1218 00:20:53.509871 1299733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:20:53.509878 1299733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:20:53.676099 1299733 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
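The native SSH client here dials whatever host port Docker mapped to the container's 22/tcp (33902 in this run). The same session can be opened manually with the key generated earlier; a sketch, assuming the port and key path from this log:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa \
        -p 33902 docker@127.0.0.1 hostname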
	
	I1218 00:20:53.676127 1299733 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:20:53.676199 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:20:53.697836 1299733 main.go:143] libmachine: Using SSH client type: native
	I1218 00:20:53.698126 1299733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:20:53.698134 1299733 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:20:53.873952 1299733 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:20:53.874032 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:20:53.890932 1299733 main.go:143] libmachine: Using SSH client type: native
	I1218 00:20:53.891244 1299733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:20:53.891261 1299733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:20:54.044973 1299733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
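The hosts fix-up script above is idempotent: it rewrites the 127.0.1.1 line when one exists and appends it otherwise, so repeated starts do not stack duplicate entries. The same guard in standalone form (HOST is illustrative):

    HOST=functional-232602
    if ! grep -q "[[:space:]]$HOST\$" /etc/hosts; then
      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $HOST/" /etc/hosts
      else
        echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
      fi
    fi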
	I1218 00:20:54.044990 1299733 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:20:54.045008 1299733 ubuntu.go:190] setting up certificates
	I1218 00:20:54.045016 1299733 provision.go:84] configureAuth start
	I1218 00:20:54.045076 1299733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:20:54.062459 1299733 provision.go:143] copyHostCerts
	I1218 00:20:54.062522 1299733 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:20:54.062530 1299733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:20:54.062609 1299733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:20:54.062707 1299733 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:20:54.062711 1299733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:20:54.062736 1299733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:20:54.062794 1299733 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:20:54.062797 1299733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:20:54.062821 1299733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:20:54.062879 1299733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
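The server certificate generated here must carry every name in the san=[...] list above, since the machine is reached via 127.0.0.1, the static container IP and the hostname. Once written, the SANs can be inspected directly; a sketch assuming the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'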
	I1218 00:20:54.190472 1299733 provision.go:177] copyRemoteCerts
	I1218 00:20:54.190523 1299733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:20:54.190569 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:20:54.208147 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:20:54.316446 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:20:54.335031 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:20:54.352732 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 00:20:54.369946 1299733 provision.go:87] duration metric: took 324.879384ms to configureAuth
	I1218 00:20:54.369963 1299733 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:20:54.370172 1299733 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:20:54.370177 1299733 machine.go:97] duration metric: took 880.548208ms to provisionDockerMachine
	I1218 00:20:54.370185 1299733 client.go:176] duration metric: took 6.884042324s to LocalClient.Create
	I1218 00:20:54.370201 1299733 start.go:167] duration metric: took 6.88408504s to libmachine.API.Create "functional-232602"
	I1218 00:20:54.370207 1299733 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:20:54.370217 1299733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:20:54.370310 1299733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:20:54.370357 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:20:54.387674 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:20:54.496664 1299733 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:20:54.499873 1299733 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:20:54.499891 1299733 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:20:54.499902 1299733 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:20:54.499958 1299733 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:20:54.500044 1299733 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:20:54.500121 1299733 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:20:54.500178 1299733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:20:54.507585 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:20:54.525591 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:20:54.543747 1299733 start.go:296] duration metric: took 173.52649ms for postStartSetup
	I1218 00:20:54.544118 1299733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:20:54.561132 1299733 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:20:54.561409 1299733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:20:54.561447 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:20:54.578288 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:20:54.681579 1299733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:20:54.686248 1299733 start.go:128] duration metric: took 7.203869534s to createHost
	I1218 00:20:54.686263 1299733 start.go:83] releasing machines lock for "functional-232602", held for 7.203978964s
	I1218 00:20:54.686332 1299733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:20:54.708820 1299733 out.go:179] * Found network options:
	I1218 00:20:54.711789 1299733 out.go:179]   - HTTP_PROXY=localhost:42501
	W1218 00:20:54.714647 1299733 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1218 00:20:54.717612 1299733 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1218 00:20:54.720543 1299733 ssh_runner.go:195] Run: cat /version.json
	I1218 00:20:54.720587 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:20:54.720644 1299733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:20:54.720703 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:20:54.738768 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:20:54.740697 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:20:54.844212 1299733 ssh_runner.go:195] Run: systemctl --version
	I1218 00:20:54.937885 1299733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 00:20:54.942554 1299733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:20:54.942627 1299733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:20:54.969774 1299733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1218 00:20:54.969789 1299733 start.go:496] detecting cgroup driver to use...
	I1218 00:20:54.969830 1299733 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:20:54.969882 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:20:54.984329 1299733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:20:54.996960 1299733 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:20:54.997013 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:20:55.041871 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:20:55.070165 1299733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:20:55.214048 1299733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:20:55.338714 1299733 docker.go:234] disabling docker service ...
	I1218 00:20:55.338772 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:20:55.361776 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:20:55.375748 1299733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:20:55.492547 1299733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:20:55.601361 1299733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:20:55.614096 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:20:55.627575 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:20:55.636509 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:20:55.645793 1299733 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:20:55.645868 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:20:55.654853 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:20:55.663826 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:20:55.672369 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:20:55.681023 1299733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:20:55.689350 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:20:55.697956 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:20:55.706607 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:20:55.716460 1299733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:20:55.723996 1299733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:20:55.731555 1299733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:20:55.845481 1299733 ssh_runner.go:195] Run: sudo systemctl restart containerd
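All of the sed edits above rewrite /etc/containerd/config.toml in place before this restart. The two settings most relevant to this run, the cgroup driver and the sandbox image, can be verified afterwards; for example:

    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
    # SystemdCgroup = false     (cgroupfs driver, matching the kubelet config below)
    # sandbox_image = "registry.k8s.io/pause:3.10.1"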
	I1218 00:20:55.980328 1299733 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:20:55.980396 1299733 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:20:55.984484 1299733 start.go:564] Will wait 60s for crictl version
	I1218 00:20:55.984541 1299733 ssh_runner.go:195] Run: which crictl
	I1218 00:20:55.988184 1299733 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:20:56.014215 1299733 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:20:56.014281 1299733 ssh_runner.go:195] Run: containerd --version
	I1218 00:20:56.038843 1299733 ssh_runner.go:195] Run: containerd --version
	I1218 00:20:56.066737 1299733 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:20:56.069804 1299733 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:20:56.086697 1299733 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:20:56.090661 1299733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
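The hosts update above uses a strip-then-append pattern: filter out any old host.minikube.internal line, re-add the current one, and copy the result back, so the entry stays single and current across restarts. Spelled out with the IP from the log:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.49.1\thost.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts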
	I1218 00:20:56.100562 1299733 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:20:56.100695 1299733 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:20:56.100759 1299733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:20:56.125716 1299733 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:20:56.125729 1299733 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:20:56.125794 1299733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:20:56.153103 1299733 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:20:56.153116 1299733 cache_images.go:86] Images are preloaded, skipping loading
	I1218 00:20:56.153123 1299733 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:20:56.153214 1299733 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 00:20:56.153283 1299733 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:20:56.176942 1299733 cni.go:84] Creating CNI manager for ""
	I1218 00:20:56.176953 1299733 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:20:56.176968 1299733 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:20:56.176989 1299733 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:20:56.177097 1299733 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 00:20:56.177163 1299733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:20:56.185490 1299733 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:20:56.185556 1299733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:20:56.194059 1299733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:20:56.207490 1299733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:20:56.220180 1299733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
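The three scp-from-memory writes above materialize the kubelet drop-in, the kubelet unit and the kubeadm manifest on the node. Each can be sanity-checked in place; a hedged example (kubeadm config validate exists in kubeadm 1.26 and later):

    systemctl cat kubelet    # merged view: kubelet.service plus the 10-kubeadm.conf drop-in
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new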
	I1218 00:20:56.233327 1299733 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:20:56.237102 1299733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 00:20:56.247077 1299733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:20:56.354727 1299733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:20:56.370734 1299733 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:20:56.370745 1299733 certs.go:195] generating shared ca certs ...
	I1218 00:20:56.370759 1299733 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:20:56.370899 1299733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:20:56.370941 1299733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:20:56.370947 1299733 certs.go:257] generating profile certs ...
	I1218 00:20:56.371009 1299733 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:20:56.371018 1299733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt with IP's: []
	I1218 00:20:56.572670 1299733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt ...
	I1218 00:20:56.572688 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: {Name:mk002b9fd89396a08ba8aeecbad98a7698da5b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:20:56.572891 1299733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key ...
	I1218 00:20:56.572897 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key: {Name:mkc8c019d2d5154bb1375f4761c3e2dfc2d15280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:20:56.572994 1299733 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:20:56.573006 1299733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt.37b948f8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1218 00:20:56.803308 1299733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt.37b948f8 ...
	I1218 00:20:56.803324 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt.37b948f8: {Name:mk03c8ba32fdbe7b0ee88e382a35aa5c6df473b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:20:56.803512 1299733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8 ...
	I1218 00:20:56.803520 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8: {Name:mkd1d489170e43b3ee96768b44fda4a7baa0a1e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:20:56.803604 1299733 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt.37b948f8 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt
	I1218 00:20:56.803685 1299733 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key
	I1218 00:20:56.803738 1299733 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:20:56.803750 1299733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt with IP's: []
	I1218 00:20:56.972062 1299733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt ...
	I1218 00:20:56.972078 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt: {Name:mkc5a73242ecc19a02344f7df5b3bfc837658efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:20:56.972263 1299733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key ...
	I1218 00:20:56.972280 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key: {Name:mk22ccb603e4f887e66caba9e8f646be1037eda3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:20:56.972476 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:20:56.972517 1299733 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:20:56.972525 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:20:56.972549 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:20:56.972571 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:20:56.972594 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:20:56.972660 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:20:56.973219 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:20:56.992512 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:20:57.014831 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:20:57.033965 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:20:57.051553 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:20:57.069634 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:20:57.086917 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:20:57.104759 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:20:57.123062 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:20:57.141354 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:20:57.163155 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:20:57.182372 1299733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:20:57.196347 1299733 ssh_runner.go:195] Run: openssl version
	I1218 00:20:57.203670 1299733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:20:57.211722 1299733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:20:57.219548 1299733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:20:57.223312 1299733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:20:57.223367 1299733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:20:57.264981 1299733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 00:20:57.272675 1299733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
	I1218 00:20:57.280571 1299733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:20:57.288189 1299733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:20:57.296038 1299733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:20:57.299781 1299733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:20:57.299852 1299733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:20:57.341885 1299733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:20:57.349662 1299733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
	I1218 00:20:57.357223 1299733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:20:57.364781 1299733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:20:57.372134 1299733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:20:57.375869 1299733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:20:57.375927 1299733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:20:57.418297 1299733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:20:57.425777 1299733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
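The 51391683.0, 3ec20f2e.0 and b5213941.0 names above are OpenSSL subject-hash links, each derived from the certificate itself, which is why every ln -fs is preceded by an openssl x509 -hash call. The sequence generalizes to any CA file:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"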
	I1218 00:20:57.433300 1299733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:20:57.437036 1299733 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 00:20:57.437079 1299733 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:20:57.437145 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:20:57.437218 1299733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:20:57.468987 1299733 cri.go:89] found id: ""
	I1218 00:20:57.469046 1299733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:20:57.476736 1299733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:20:57.484358 1299733 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:20:57.484412 1299733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:20:57.492092 1299733 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:20:57.492101 1299733 kubeadm.go:158] found existing configuration files:
	
	I1218 00:20:57.492168 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:20:57.500112 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:20:57.500171 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:20:57.507704 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:20:57.515390 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:20:57.515445 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:20:57.523065 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:20:57.530871 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:20:57.530940 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:20:57.538595 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:20:57.546266 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:20:57.546329 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
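The stale-config cleanup above follows a fixed pattern: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint and deletes the file when the check fails, so the following kubeadm init regenerates it. A minimal sketch of that check, reconstructed from the commands logged above (endpoint and file names as logged; the loop itself is illustrative, not minikube's source):

    ENDPOINT="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Missing file or wrong endpoint => remove so `kubeadm init` rewrites it.
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done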
	I1218 00:20:57.553605 1299733 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:20:57.590567 1299733 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:20:57.590616 1299733 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:20:57.668789 1299733 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:20:57.668854 1299733 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:20:57.668887 1299733 kubeadm.go:319] OS: Linux
	I1218 00:20:57.668931 1299733 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:20:57.668978 1299733 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:20:57.669024 1299733 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:20:57.669071 1299733 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:20:57.669117 1299733 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:20:57.669170 1299733 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:20:57.669214 1299733 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:20:57.669260 1299733 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:20:57.669305 1299733 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:20:57.734464 1299733 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:20:57.734568 1299733 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:20:57.734657 1299733 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:20:57.741180 1299733 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:20:57.746689 1299733 out.go:252]   - Generating certificates and keys ...
	I1218 00:20:57.746815 1299733 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:20:57.746889 1299733 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:20:57.905686 1299733 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 00:20:58.168505 1299733 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 00:20:58.293672 1299733 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 00:20:58.460852 1299733 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 00:20:59.137900 1299733 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 00:20:59.138199 1299733 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 00:20:59.499789 1299733 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 00:20:59.500152 1299733 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 00:20:59.613582 1299733 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 00:20:59.719414 1299733 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 00:20:59.785473 1299733 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 00:20:59.785696 1299733 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:20:59.991205 1299733 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:21:00.141545 1299733 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:21:00.199456 1299733 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:21:00.547066 1299733 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:21:00.852695 1299733 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:21:00.853401 1299733 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:21:00.858094 1299733 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:21:00.861857 1299733 out.go:252]   - Booting up control plane ...
	I1218 00:21:00.861971 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:21:00.862072 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:21:00.862713 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:21:00.890917 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:21:00.891042 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:21:00.898881 1299733 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:21:00.899209 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:21:00.899267 1299733 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:21:01.045206 1299733 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:21:01.045327 1299733 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:25:01.043114 1299733 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000229243s
	I1218 00:25:01.043134 1299733 kubeadm.go:319] 
	I1218 00:25:01.043539 1299733 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:25:01.043613 1299733 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:25:01.043800 1299733 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:25:01.043808 1299733 kubeadm.go:319] 
	I1218 00:25:01.044153 1299733 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:25:01.044209 1299733 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:25:01.044262 1299733 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:25:01.044266 1299733 kubeadm.go:319] 
	I1218 00:25:01.049563 1299733 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:25:01.050002 1299733 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:25:01.050114 1299733 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:25:01.050379 1299733 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 00:25:01.050385 1299733 kubeadm.go:319] 
	I1218 00:25:01.050451 1299733 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1218 00:25:01.050583 1299733 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000229243s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
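Before the retry below, the [WARNING SystemVerification] lines above (and the kubelet journal at the end of this log) already point at the likely culprit: this node runs cgroup v1, which kubelet v1.35 rejects by default. A few illustrative commands to confirm the host's cgroup setup before rerunning, assuming a shell on the node; the stat filesystem-type check is the standard probe (cgroup2fs means v2, tmpfs means v1), and the docker info template fields exist in recent Docker releases:

    # On the node: which cgroup hierarchy is mounted?
    stat -fc %T /sys/fs/cgroup    # cgroup2fs => cgroup v2, tmpfs => cgroup v1
    # On the Docker host: driver and cgroup version Docker reports.
    docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'
    # Then follow kubeadm's own suggestion:
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50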
	
	I1218 00:25:01.050666 1299733 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:25:01.460466 1299733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:25:01.474204 1299733 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:25:01.474262 1299733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:25:01.482019 1299733 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:25:01.482028 1299733 kubeadm.go:158] found existing configuration files:
	
	I1218 00:25:01.482090 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:25:01.489976 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:25:01.490052 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:25:01.497758 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:25:01.505630 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:25:01.505687 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:25:01.513023 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:25:01.521855 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:25:01.521918 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:25:01.529855 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:25:01.537737 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:25:01.537801 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 00:25:01.545555 1299733 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:25:01.585938 1299733 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:25:01.585993 1299733 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:25:01.663035 1299733 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:25:01.663106 1299733 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:25:01.663140 1299733 kubeadm.go:319] OS: Linux
	I1218 00:25:01.663185 1299733 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:25:01.663231 1299733 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:25:01.663277 1299733 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:25:01.663324 1299733 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:25:01.663371 1299733 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:25:01.663422 1299733 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:25:01.663466 1299733 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:25:01.663513 1299733 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:25:01.663558 1299733 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:25:01.731113 1299733 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:25:01.731244 1299733 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:25:01.731342 1299733 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:25:01.741084 1299733 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:25:01.746702 1299733 out.go:252]   - Generating certificates and keys ...
	I1218 00:25:01.746801 1299733 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:25:01.746871 1299733 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:25:01.746952 1299733 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:25:01.747017 1299733 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:25:01.747098 1299733 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:25:01.747157 1299733 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:25:01.747224 1299733 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:25:01.747291 1299733 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:25:01.747370 1299733 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:25:01.747446 1299733 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:25:01.747483 1299733 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:25:01.747544 1299733 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:25:01.988776 1299733 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:25:02.326144 1299733 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:25:02.628849 1299733 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:25:02.775195 1299733 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:25:03.087870 1299733 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:25:03.088661 1299733 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:25:03.091595 1299733 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:25:03.094841 1299733 out.go:252]   - Booting up control plane ...
	I1218 00:25:03.094942 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:25:03.095019 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:25:03.096349 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:25:03.117972 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:25:03.118083 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:25:03.126667 1299733 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:25:03.127045 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:25:03.127267 1299733 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:25:03.267256 1299733 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:25:03.267371 1299733 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:29:03.266822 1299733 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000195767s
	I1218 00:29:03.266840 1299733 kubeadm.go:319] 
	I1218 00:29:03.267205 1299733 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:29:03.267271 1299733 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:29:03.267461 1299733 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:29:03.267466 1299733 kubeadm.go:319] 
	I1218 00:29:03.267879 1299733 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:29:03.268161 1299733 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:29:03.268217 1299733 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:29:03.268222 1299733 kubeadm.go:319] 
	I1218 00:29:03.272904 1299733 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:29:03.273321 1299733 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:29:03.273425 1299733 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:29:03.273714 1299733 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 00:29:03.273726 1299733 kubeadm.go:319] 
	I1218 00:29:03.273804 1299733 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 00:29:03.273873 1299733 kubeadm.go:403] duration metric: took 8m5.836797344s to StartCluster
	I1218 00:29:03.273907 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:29:03.273969 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:29:03.302314 1299733 cri.go:89] found id: ""
	I1218 00:29:03.302336 1299733 logs.go:282] 0 containers: []
	W1218 00:29:03.302344 1299733 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:29:03.302349 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:29:03.302407 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:29:03.330654 1299733 cri.go:89] found id: ""
	I1218 00:29:03.330668 1299733 logs.go:282] 0 containers: []
	W1218 00:29:03.330676 1299733 logs.go:284] No container was found matching "etcd"
	I1218 00:29:03.330684 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:29:03.330748 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:29:03.357980 1299733 cri.go:89] found id: ""
	I1218 00:29:03.357994 1299733 logs.go:282] 0 containers: []
	W1218 00:29:03.358001 1299733 logs.go:284] No container was found matching "coredns"
	I1218 00:29:03.358006 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:29:03.358064 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:29:03.386444 1299733 cri.go:89] found id: ""
	I1218 00:29:03.386458 1299733 logs.go:282] 0 containers: []
	W1218 00:29:03.386465 1299733 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:29:03.386470 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:29:03.386531 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:29:03.416098 1299733 cri.go:89] found id: ""
	I1218 00:29:03.416123 1299733 logs.go:282] 0 containers: []
	W1218 00:29:03.416130 1299733 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:29:03.416135 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:29:03.416208 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:29:03.449611 1299733 cri.go:89] found id: ""
	I1218 00:29:03.449638 1299733 logs.go:282] 0 containers: []
	W1218 00:29:03.449645 1299733 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:29:03.449651 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:29:03.449719 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:29:03.475818 1299733 cri.go:89] found id: ""
	I1218 00:29:03.475842 1299733 logs.go:282] 0 containers: []
	W1218 00:29:03.475850 1299733 logs.go:284] No container was found matching "kindnet"
	I1218 00:29:03.475858 1299733 logs.go:123] Gathering logs for kubelet ...
	I1218 00:29:03.475869 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:29:03.535401 1299733 logs.go:123] Gathering logs for dmesg ...
	I1218 00:29:03.535420 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:29:03.550585 1299733 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:29:03.550604 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:29:03.618511 1299733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:29:03.609577    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:29:03.610312    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:29:03.611858    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:29:03.612439    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:29:03.614002    4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted: identical to the five "connection refused" lines above]
	I1218 00:29:03.618521 1299733 logs.go:123] Gathering logs for containerd ...
	I1218 00:29:03.618533 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:29:03.656888 1299733 logs.go:123] Gathering logs for container status ...
	I1218 00:29:03.656907 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1218 00:29:03.685692 1299733 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195767s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 00:29:03.685732 1299733 out.go:285] * 
	W1218 00:29:03.685795 1299733 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: identical to the "Error starting cluster" output above]
	W1218 00:29:03.685849 1299733 out.go:285] * 
	W1218 00:29:03.687965 1299733 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:29:03.692973 1299733 out.go:203] 
	W1218 00:29:03.696844 1299733 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: identical to the "Error starting cluster" output above]
	W1218 00:29:03.696895 1299733 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 00:29:03.696914 1299733 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 00:29:03.700130 1299733 out.go:203] 
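The suggestion printed above, spelled out as a command (profile name taken from this run). Note that the kubelet journal later in this log points at cgroup v1 support rather than the cgroup driver, so this is minikube's generic advice for issue 4172, not a confirmed fix for this failure:

    minikube start -p functional-232602 --extra-config=kubelet.cgroup-driver=systemd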
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918779954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918792951Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918833213Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918850033Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918881548Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918895825Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918905621Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918919004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918935299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918963081Z" level=info msg="Connect containerd service"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.919244830Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.919797795Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.939792958Z" level=info msg="Start subscribing containerd event"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.939884665Z" level=info msg="Start recovering state"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.940513394Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.940712824Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977535309Z" level=info msg="Start event monitor"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977731998Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977826052Z" level=info msg="Start streaming server"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977899855Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977958471Z" level=info msg="runtime interface starting up..."
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.978014208Z" level=info msg="starting plugins..."
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.978082424Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 00:20:55 functional-232602 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.980333395Z" level=info msg="containerd successfully booted in 0.083017s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:29:04.691398    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:29:04.691947    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:29:04.693591    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:29:04.694137    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:29:04.695897    4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:29:04 up  7:11,  0 user,  load average: 0.17, 0.47, 0.88
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:29:01 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:29:01 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 18 00:29:01 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:29:01 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:29:01 functional-232602 kubelet[4677]: E1218 00:29:01.947913    4677 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:29:01 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:29:01 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:29:02 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 18 00:29:02 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:29:02 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:29:02 functional-232602 kubelet[4682]: E1218 00:29:02.695044    4682 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:29:02 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:29:02 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:29:03 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 18 00:29:03 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:29:03 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:29:03 functional-232602 kubelet[4720]: E1218 00:29:03.455995    4720 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:29:03 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:29:03 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:29:04 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 18 00:29:04 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:29:04 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:29:04 functional-232602 kubelet[4789]: E1218 00:29:04.206712    4789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:29:04 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:29:04 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
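The kubelet restart loop captured above (restart counter 318 through 321) is a single repeated validation failure: this kubelet build refuses to run on a host that is still on cgroup v1, so the control plane never comes up and every probe of localhost:8441 is refused. As a minimal sketch, assuming shell access to the host and that the node container carries the profile name functional-232602 (as the log above suggests), the host's cgroup mode can be checked like this:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1.
	docker exec functional-232602 stat -fc %T /sys/fs/cgroup/
	# Cross-check the kernel command line; many distros opt into cgroup v2
	# with systemd.unified_cgroup_hierarchy=1.
	tr ' ' '\n' </proc/cmdline | grep -i cgroup || true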
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 6 (361.746793ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 00:29:05.177416 1305408 status.go:458] kubeconfig endpoint: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (497.98s)
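Beyond the stopped apiserver, the status probe above also reports that the "functional-232602" context is missing from the kubeconfig, which matches the stale-context warning in stdout. A minimal sketch of the repair that warning points at, assuming the same profile name and the default kubeconfig resolution (illustrative only, not something the harness runs):

	# Rewrite this profile's kubeconfig entry to the current apiserver endpoint.
	minikube update-context -p functional-232602
	# Confirm the context resolves before retrying kubectl against the cluster.
	kubectl config get-contexts
	kubectl --context functional-232602 get nodes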

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (367.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1218 00:29:05.193644 1261148 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232602 --alsologtostderr -v=8
E1218 00:30:04.395042 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:30:32.102478 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:33:25.215164 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:34:48.292038 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:35:04.395275 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232602 --alsologtostderr -v=8: exit status 80 (6m5.227655117s)

-- stdout --
	* [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	* Pulling base image v0.0.48-1765966054-22186 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1218 00:29:05.243654 1305484 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:29:05.243837 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.243867 1305484 out.go:374] Setting ErrFile to fd 2...
	I1218 00:29:05.243888 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.244277 1305484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:29:05.244868 1305484 out.go:368] Setting JSON to false
	I1218 00:29:05.245808 1305484 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25892,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:29:05.245939 1305484 start.go:143] virtualization:  
	I1218 00:29:05.249423 1305484 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:29:05.253059 1305484 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:29:05.253187 1305484 notify.go:221] Checking for updates...
	I1218 00:29:05.259241 1305484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:29:05.262171 1305484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:05.265173 1305484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:29:05.268135 1305484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:29:05.270950 1305484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:29:05.274293 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:05.274440 1305484 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:29:05.308275 1305484 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:29:05.308407 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.375725 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.366230286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.375834 1305484 docker.go:319] overlay module found
	I1218 00:29:05.378939 1305484 out.go:179] * Using the docker driver based on existing profile
	I1218 00:29:05.381619 1305484 start.go:309] selected driver: docker
	I1218 00:29:05.381657 1305484 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.381752 1305484 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:29:05.381892 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.440724 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.431205912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.441147 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:05.441215 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:05.441270 1305484 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.444475 1305484 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:29:05.447488 1305484 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:29:05.450519 1305484 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:29:05.453580 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:05.453631 1305484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:29:05.453641 1305484 cache.go:65] Caching tarball of preloaded images
	I1218 00:29:05.453681 1305484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:29:05.453745 1305484 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:29:05.453756 1305484 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:29:05.453862 1305484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:29:05.474116 1305484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:29:05.474140 1305484 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:29:05.474160 1305484 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:29:05.474205 1305484 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:29:05.474271 1305484 start.go:364] duration metric: took 39.072µs to acquireMachinesLock for "functional-232602"
	I1218 00:29:05.474294 1305484 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:29:05.474305 1305484 fix.go:54] fixHost starting: 
	I1218 00:29:05.474585 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:05.494473 1305484 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:29:05.494511 1305484 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:29:05.497625 1305484 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:29:05.497657 1305484 machine.go:94] provisionDockerMachine start ...
	I1218 00:29:05.497756 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.514682 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.515020 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.515044 1305484 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:29:05.668376 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.668400 1305484 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:29:05.668465 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.700140 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.700482 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.700495 1305484 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:29:05.865944 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.866034 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.884487 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.884983 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.885010 1305484 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:29:06.041516 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:29:06.041541 1305484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:29:06.041561 1305484 ubuntu.go:190] setting up certificates
	I1218 00:29:06.041572 1305484 provision.go:84] configureAuth start
	I1218 00:29:06.041652 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.060898 1305484 provision.go:143] copyHostCerts
	I1218 00:29:06.060951 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.060994 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:29:06.061002 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.061080 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:29:06.061163 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061182 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:29:06.061187 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061215 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:29:06.061256 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061273 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:29:06.061277 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061301 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:29:06.061349 1305484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:29:06.177802 1305484 provision.go:177] copyRemoteCerts
	I1218 00:29:06.177898 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:29:06.177967 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.195440 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.308765 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 00:29:06.308835 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:29:06.326972 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 00:29:06.327095 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:29:06.345137 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 00:29:06.345225 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:29:06.363588 1305484 provision.go:87] duration metric: took 321.991809ms to configureAuth
	I1218 00:29:06.363617 1305484 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:29:06.363812 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:06.363826 1305484 machine.go:97] duration metric: took 866.163062ms to provisionDockerMachine
	I1218 00:29:06.363833 1305484 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:29:06.363845 1305484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:29:06.363904 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:29:06.363949 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.381445 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.493044 1305484 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:29:06.496574 1305484 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1218 00:29:06.496595 1305484 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1218 00:29:06.496599 1305484 command_runner.go:130] > VERSION_ID="12"
	I1218 00:29:06.496604 1305484 command_runner.go:130] > VERSION="12 (bookworm)"
	I1218 00:29:06.496612 1305484 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1218 00:29:06.496615 1305484 command_runner.go:130] > ID=debian
	I1218 00:29:06.496641 1305484 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1218 00:29:06.496649 1305484 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1218 00:29:06.496655 1305484 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1218 00:29:06.496744 1305484 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:29:06.496762 1305484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:29:06.496773 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:29:06.496837 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:29:06.496920 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:29:06.496932 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /etc/ssl/certs/12611482.pem
	I1218 00:29:06.497013 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:29:06.497022 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> /etc/test/nested/copy/1261148/hosts
	I1218 00:29:06.497083 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:29:06.504772 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:06.523736 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:29:06.542759 1305484 start.go:296] duration metric: took 178.908993ms for postStartSetup
	I1218 00:29:06.542856 1305484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:29:06.542901 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.560753 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.665778 1305484 command_runner.go:130] > 18%
	I1218 00:29:06.665854 1305484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:29:06.671095 1305484 command_runner.go:130] > 160G
	I1218 00:29:06.671651 1305484 fix.go:56] duration metric: took 1.19734099s for fixHost
	I1218 00:29:06.671671 1305484 start.go:83] releasing machines lock for "functional-232602", held for 1.197387766s
	I1218 00:29:06.671738 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.688941 1305484 ssh_runner.go:195] Run: cat /version.json
	I1218 00:29:06.689003 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.689377 1305484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:29:06.689435 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.710307 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.721003 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.812429 1305484 command_runner.go:130] > {"iso_version": "v1.37.0-1765846775-22141", "kicbase_version": "v0.0.48-1765966054-22186", "minikube_version": "v1.37.0", "commit": "c344550999bcbb78f38b2df057224788bb2d30b2"}
	I1218 00:29:06.812585 1305484 ssh_runner.go:195] Run: systemctl --version
	I1218 00:29:06.910410 1305484 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 00:29:06.913301 1305484 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1218 00:29:06.913347 1305484 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1218 00:29:06.913421 1305484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 00:29:06.917811 1305484 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 00:29:06.917849 1305484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:29:06.917931 1305484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:29:06.925837 1305484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:29:06.925861 1305484 start.go:496] detecting cgroup driver to use...
	I1218 00:29:06.925891 1305484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:29:06.925936 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:29:06.941416 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:29:06.954870 1305484 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:29:06.954953 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:29:06.971407 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:29:06.985680 1305484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:29:07.097075 1305484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:29:07.240817 1305484 docker.go:234] disabling docker service ...
	I1218 00:29:07.240965 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:29:07.256804 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:29:07.271026 1305484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:29:07.407005 1305484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:29:07.534286 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:29:07.548592 1305484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:29:07.562819 1305484 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 00:29:07.564071 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:29:07.574541 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:29:07.583515 1305484 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:29:07.583615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:29:07.592330 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.601414 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:29:07.610399 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.619445 1305484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:29:07.627615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:29:07.637099 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:29:07.646771 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:29:07.656000 1305484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:29:07.663026 1305484 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 00:29:07.664029 1305484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:29:07.671707 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:07.789368 1305484 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 00:29:07.948156 1305484 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:29:07.948230 1305484 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:29:07.952108 1305484 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1218 00:29:07.952130 1305484 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 00:29:07.952136 1305484 command_runner.go:130] > Device: 0,72	Inode: 1611        Links: 1
	I1218 00:29:07.952144 1305484 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:07.952150 1305484 command_runner.go:130] > Access: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952154 1305484 command_runner.go:130] > Modify: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952160 1305484 command_runner.go:130] > Change: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952164 1305484 command_runner.go:130] >  Birth: -
	I1218 00:29:07.952461 1305484 start.go:564] Will wait 60s for crictl version
	I1218 00:29:07.952520 1305484 ssh_runner.go:195] Run: which crictl
	I1218 00:29:07.958389 1305484 command_runner.go:130] > /usr/local/bin/crictl
	I1218 00:29:07.959041 1305484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:29:07.980682 1305484 command_runner.go:130] > Version:  0.1.0
	I1218 00:29:07.980702 1305484 command_runner.go:130] > RuntimeName:  containerd
	I1218 00:29:07.980709 1305484 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1218 00:29:07.980714 1305484 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 00:29:07.982988 1305484 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:29:07.983059 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.002890 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.002977 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.027238 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.034949 1305484 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:29:08.037919 1305484 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:29:08.055210 1305484 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:29:08.059294 1305484 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1218 00:29:08.059421 1305484 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:29:08.059535 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:08.059617 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.084496 1305484 command_runner.go:130] > {
	I1218 00:29:08.084519 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.084525 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084534 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.084540 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084546 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.084550 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084554 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084566 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.084574 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084578 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.084582 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084589 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084593 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084596 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084609 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.084616 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084642 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.084646 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084651 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084659 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.084666 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084671 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.084678 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084682 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084686 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084689 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084696 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.084705 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084716 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.084722 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084731 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084739 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.084751 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084756 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.084760 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.084764 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084768 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084777 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084786 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.084791 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084802 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.084805 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084810 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084818 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.084824 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084829 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.084835 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084839 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084851 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084855 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084860 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084863 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084868 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084876 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.084883 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084888 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.084892 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084896 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084905 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.084917 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084922 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.084929 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084943 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084946 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084957 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084961 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084965 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084968 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084975 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.084983 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084991 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.084998 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085003 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085019 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.085026 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085033 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.085037 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085041 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085044 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085050 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085054 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085057 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085060 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085067 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.085073 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085078 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.085084 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085088 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085106 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.085110 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085114 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.085124 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085128 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085132 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085138 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085148 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.085153 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085160 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.085166 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085170 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085182 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.085191 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085195 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.085199 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085203 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085206 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085224 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085228 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085231 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085235 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085244 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.085252 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085258 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.085264 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085270 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085278 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.085287 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085291 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.085296 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085300 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.085306 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085313 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085317 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.085320 1305484 command_runner.go:130] >     }
	I1218 00:29:08.085323 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.085325 1305484 command_runner.go:130] > }
	I1218 00:29:08.087939 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.087964 1305484 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:29:08.088036 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.111236 1305484 command_runner.go:130] > {
	I1218 00:29:08.111264 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.111269 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111279 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.111286 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111295 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.111298 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111302 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111311 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.111318 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111322 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.111330 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111334 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111337 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111340 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111347 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.111352 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111358 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.111364 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111368 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111379 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.111391 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111396 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.111400 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111404 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111407 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111410 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111417 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.111421 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111426 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.111429 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111437 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111447 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.111454 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111462 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.111467 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.111475 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111478 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111483 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111491 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.111499 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111504 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.111507 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111511 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111519 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.111522 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111527 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.111533 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111537 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111543 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111547 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111559 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111562 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111565 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111573 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.111580 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111585 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.111588 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111592 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111600 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.111606 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111611 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.111617 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111626 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111632 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111635 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111639 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111646 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111652 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111659 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.111662 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111668 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.111671 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111676 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111690 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.111697 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111701 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.111707 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111711 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111716 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111720 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111739 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111742 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111746 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111755 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.111759 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111768 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.111771 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111775 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111785 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.111798 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111802 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.111805 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111809 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111813 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111816 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111825 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.111835 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111840 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.111843 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111855 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111866 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.111872 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111876 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.111880 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111884 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111889 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111893 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111899 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111903 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111913 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111921 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.111925 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111929 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.111933 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111937 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111947 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.111959 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111963 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.111967 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111971 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.111978 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111982 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111989 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.111992 1305484 command_runner.go:130] >     }
	I1218 00:29:08.112001 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.112004 1305484 command_runner.go:130] > }
	I1218 00:29:08.114369 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.114392 1305484 cache_images.go:86] Images are preloaded, skipping loading
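	(Both preload checks above hinge on decoding the `sudo crictl images --output json` dump and confirming every expected tag is present. A minimal sketch of that check in Go, assuming only the JSON shape visible in the log; the struct and function names are illustrative, not minikube's:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors the fields visible in the crictl dump above.
	type criImage struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
		Pinned   bool     `json:"pinned"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	// preloaded reports whether every tag in want appears in the crictl output.
	func preloaded(want []string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range want {
			if !have[tag] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := preloaded([]string{
			"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
			"registry.k8s.io/pause:3.10.1",
		})
		fmt.Println(ok, err)
	}
	)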
	I1218 00:29:08.114401 1305484 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:29:08.114566 1305484 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
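	(The unit fragment above becomes the 10-kubeadm.conf drop-in scp'd a few lines below at 326 bytes. A minimal sketch of rendering it from the node parameters, assuming only the flags visible in the log; the helper name is illustrative:

	package main

	import "fmt"

	// kubeletUnit renders the systemd drop-in seen in the log. The empty
	// ExecStart= line is the systemd convention for clearing the ExecStart
	// inherited from kubelet.service before overriding it.
	func kubeletUnit(version, nodeName, nodeIP string) string {
		return fmt.Sprintf(`[Unit]
	Wants=containerd.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, version, nodeName, nodeIP)
	}

	func main() {
		fmt.Print(kubeletUnit("v1.35.0-rc.1", "functional-232602", "192.168.49.2"))
	}
	)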
	I1218 00:29:08.114639 1305484 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:29:08.137373 1305484 command_runner.go:130] > {
	I1218 00:29:08.137395 1305484 command_runner.go:130] >   "cniconfig": {
	I1218 00:29:08.137400 1305484 command_runner.go:130] >     "Networks": [
	I1218 00:29:08.137405 1305484 command_runner.go:130] >       {
	I1218 00:29:08.137411 1305484 command_runner.go:130] >         "Config": {
	I1218 00:29:08.137420 1305484 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1218 00:29:08.137425 1305484 command_runner.go:130] >           "Name": "cni-loopback",
	I1218 00:29:08.137430 1305484 command_runner.go:130] >           "Plugins": [
	I1218 00:29:08.137433 1305484 command_runner.go:130] >             {
	I1218 00:29:08.137438 1305484 command_runner.go:130] >               "Network": {
	I1218 00:29:08.137442 1305484 command_runner.go:130] >                 "ipam": {},
	I1218 00:29:08.137452 1305484 command_runner.go:130] >                 "type": "loopback"
	I1218 00:29:08.137456 1305484 command_runner.go:130] >               },
	I1218 00:29:08.137463 1305484 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1218 00:29:08.137467 1305484 command_runner.go:130] >             }
	I1218 00:29:08.137470 1305484 command_runner.go:130] >           ],
	I1218 00:29:08.137483 1305484 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1218 00:29:08.137489 1305484 command_runner.go:130] >         },
	I1218 00:29:08.137494 1305484 command_runner.go:130] >         "IFName": "lo"
	I1218 00:29:08.137498 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137503 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137508 1305484 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1218 00:29:08.137515 1305484 command_runner.go:130] >     "PluginDirs": [
	I1218 00:29:08.137519 1305484 command_runner.go:130] >       "/opt/cni/bin"
	I1218 00:29:08.137522 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137526 1305484 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1218 00:29:08.137529 1305484 command_runner.go:130] >     "Prefix": "eth"
	I1218 00:29:08.137533 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137536 1305484 command_runner.go:130] >   "config": {
	I1218 00:29:08.137540 1305484 command_runner.go:130] >     "cdiSpecDirs": [
	I1218 00:29:08.137544 1305484 command_runner.go:130] >       "/etc/cdi",
	I1218 00:29:08.137554 1305484 command_runner.go:130] >       "/var/run/cdi"
	I1218 00:29:08.137569 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137573 1305484 command_runner.go:130] >     "cni": {
	I1218 00:29:08.137576 1305484 command_runner.go:130] >       "binDir": "",
	I1218 00:29:08.137580 1305484 command_runner.go:130] >       "binDirs": [
	I1218 00:29:08.137584 1305484 command_runner.go:130] >         "/opt/cni/bin"
	I1218 00:29:08.137587 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.137591 1305484 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1218 00:29:08.137595 1305484 command_runner.go:130] >       "confTemplate": "",
	I1218 00:29:08.137598 1305484 command_runner.go:130] >       "ipPref": "",
	I1218 00:29:08.137602 1305484 command_runner.go:130] >       "maxConfNum": 1,
	I1218 00:29:08.137606 1305484 command_runner.go:130] >       "setupSerially": false,
	I1218 00:29:08.137610 1305484 command_runner.go:130] >       "useInternalLoopback": false
	I1218 00:29:08.137613 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137620 1305484 command_runner.go:130] >     "containerd": {
	I1218 00:29:08.137627 1305484 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1218 00:29:08.137632 1305484 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1218 00:29:08.137639 1305484 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1218 00:29:08.137645 1305484 command_runner.go:130] >       "runtimes": {
	I1218 00:29:08.137648 1305484 command_runner.go:130] >         "runc": {
	I1218 00:29:08.137654 1305484 command_runner.go:130] >           "ContainerAnnotations": null,
	I1218 00:29:08.137665 1305484 command_runner.go:130] >           "PodAnnotations": null,
	I1218 00:29:08.137670 1305484 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1218 00:29:08.137674 1305484 command_runner.go:130] >           "cgroupWritable": false,
	I1218 00:29:08.137679 1305484 command_runner.go:130] >           "cniConfDir": "",
	I1218 00:29:08.137685 1305484 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1218 00:29:08.137689 1305484 command_runner.go:130] >           "io_type": "",
	I1218 00:29:08.137695 1305484 command_runner.go:130] >           "options": {
	I1218 00:29:08.137699 1305484 command_runner.go:130] >             "BinaryName": "",
	I1218 00:29:08.137703 1305484 command_runner.go:130] >             "CriuImagePath": "",
	I1218 00:29:08.137707 1305484 command_runner.go:130] >             "CriuWorkPath": "",
	I1218 00:29:08.137710 1305484 command_runner.go:130] >             "IoGid": 0,
	I1218 00:29:08.137715 1305484 command_runner.go:130] >             "IoUid": 0,
	I1218 00:29:08.137726 1305484 command_runner.go:130] >             "NoNewKeyring": false,
	I1218 00:29:08.137734 1305484 command_runner.go:130] >             "Root": "",
	I1218 00:29:08.137738 1305484 command_runner.go:130] >             "ShimCgroup": "",
	I1218 00:29:08.137742 1305484 command_runner.go:130] >             "SystemdCgroup": false
	I1218 00:29:08.137746 1305484 command_runner.go:130] >           },
	I1218 00:29:08.137752 1305484 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1218 00:29:08.137761 1305484 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1218 00:29:08.137764 1305484 command_runner.go:130] >           "runtimePath": "",
	I1218 00:29:08.137770 1305484 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1218 00:29:08.137780 1305484 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1218 00:29:08.137784 1305484 command_runner.go:130] >           "snapshotter": ""
	I1218 00:29:08.137787 1305484 command_runner.go:130] >         }
	I1218 00:29:08.137790 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137794 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137804 1305484 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1218 00:29:08.137817 1305484 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1218 00:29:08.137822 1305484 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1218 00:29:08.137828 1305484 command_runner.go:130] >     "disableApparmor": false,
	I1218 00:29:08.137835 1305484 command_runner.go:130] >     "disableHugetlbController": true,
	I1218 00:29:08.137840 1305484 command_runner.go:130] >     "disableProcMount": false,
	I1218 00:29:08.137844 1305484 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1218 00:29:08.137853 1305484 command_runner.go:130] >     "enableCDI": true,
	I1218 00:29:08.137857 1305484 command_runner.go:130] >     "enableSelinux": false,
	I1218 00:29:08.137862 1305484 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1218 00:29:08.137866 1305484 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1218 00:29:08.137871 1305484 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1218 00:29:08.137878 1305484 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1218 00:29:08.137882 1305484 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1218 00:29:08.137887 1305484 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1218 00:29:08.137894 1305484 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1218 00:29:08.137901 1305484 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137906 1305484 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1218 00:29:08.137921 1305484 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137929 1305484 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1218 00:29:08.137940 1305484 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1218 00:29:08.137943 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137947 1305484 command_runner.go:130] >   "features": {
	I1218 00:29:08.137952 1305484 command_runner.go:130] >     "supplemental_groups_policy": true
	I1218 00:29:08.137955 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137962 1305484 command_runner.go:130] >   "golang": "go1.24.9",
	I1218 00:29:08.137972 1305484 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137984 1305484 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137998 1305484 command_runner.go:130] >   "runtimeHandlers": [
	I1218 00:29:08.138001 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138005 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138009 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138019 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138022 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138025 1305484 command_runner.go:130] >     },
	I1218 00:29:08.138028 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138043 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138048 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138053 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138056 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138060 1305484 command_runner.go:130] >       "name": "runc"
	I1218 00:29:08.138065 1305484 command_runner.go:130] >     }
	I1218 00:29:08.138069 1305484 command_runner.go:130] >   ],
	I1218 00:29:08.138074 1305484 command_runner.go:130] >   "status": {
	I1218 00:29:08.138078 1305484 command_runner.go:130] >     "conditions": [
	I1218 00:29:08.138089 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138093 1305484 command_runner.go:130] >         "message": "",
	I1218 00:29:08.138097 1305484 command_runner.go:130] >         "reason": "",
	I1218 00:29:08.138101 1305484 command_runner.go:130] >         "status": true,
	I1218 00:29:08.138112 1305484 command_runner.go:130] >         "type": "RuntimeReady"
	I1218 00:29:08.138115 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138118 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138128 1305484 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1218 00:29:08.138137 1305484 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1218 00:29:08.138140 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138147 1305484 command_runner.go:130] >         "type": "NetworkReady"
	I1218 00:29:08.138150 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138155 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138178 1305484 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1218 00:29:08.138187 1305484 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1218 00:29:08.138192 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138197 1305484 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1218 00:29:08.138203 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138206 1305484 command_runner.go:130] >     ]
	I1218 00:29:08.138209 1305484 command_runner.go:130] >   }
	I1218 00:29:08.138212 1305484 command_runner.go:130] > }
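	(The status.conditions array at the end of the `crictl info` dump is what a runtime readiness probe would inspect: RuntimeReady is true, while NetworkReady stays false until a CNI config appears in /etc/cni/net.d — kindnet is only recommended on the next line. A minimal sketch of reading those conditions, assuming the field names shown above; the types are illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// condition matches entries under status.conditions in `crictl info`.
	type condition struct {
		Type    string `json:"type"`
		Status  bool   `json:"status"`
		Reason  string `json:"reason"`
		Message string `json:"message"`
	}

	type criInfo struct {
		Status struct {
			Conditions []condition `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "info").Output()
		if err != nil {
			panic(err)
		}
		var info criInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		for _, c := range info.Status.Conditions {
			// NetworkReady=false with reason NetworkPluginNotReady is
			// expected at this point in the log: no CNI config exists yet.
			fmt.Printf("%s=%v (%s)\n", c.Type, c.Status, c.Reason)
		}
	}
	)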
	I1218 00:29:08.140863 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:08.140888 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:08.140910 1305484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:29:08.140937 1305484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:29:08.141052 1305484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 00:29:08.141124 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:29:08.148733 1305484 command_runner.go:130] > kubeadm
	I1218 00:29:08.148755 1305484 command_runner.go:130] > kubectl
	I1218 00:29:08.148759 1305484 command_runner.go:130] > kubelet
	I1218 00:29:08.149813 1305484 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:29:08.149929 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:29:08.157899 1305484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:29:08.171631 1305484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:29:08.185534 1305484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
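	(Once kubeadm.yaml.new lands on the node, the restart path later diffs it against the live config, as shown in the `sudo diff -u` run further below. A hedged way to sanity-check such a generated file by hand is to let kubeadm itself validate it; `kubeadm config validate` exists in recent kubeadm releases, but this exact invocation is illustrative, not something minikube runs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// main asks the pinned kubeadm binary to validate the config that was
	// just transferred; the path matches the scp destination in the log.
	func main() {
		const path = "/var/tmp/minikube/kubeadm.yaml.new"
		cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm",
			"config", "validate", "--config", path)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "config rejected:", err)
			os.Exit(1)
		}
	}
	)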
	I1218 00:29:08.199213 1305484 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:29:08.203261 1305484 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1218 00:29:08.203343 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:08.317482 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:08.643734 1305484 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:29:08.643804 1305484 certs.go:195] generating shared ca certs ...
	I1218 00:29:08.643833 1305484 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:08.644029 1305484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:29:08.644119 1305484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:29:08.644145 1305484 certs.go:257] generating profile certs ...
	I1218 00:29:08.644307 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:29:08.644441 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:29:08.644531 1305484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:29:08.644560 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 00:29:08.644603 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 00:29:08.644662 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 00:29:08.644693 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 00:29:08.644737 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 00:29:08.644768 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 00:29:08.644809 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 00:29:08.644841 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 00:29:08.644932 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:29:08.645003 1305484 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:29:08.645041 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:29:08.645094 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:29:08.645151 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:29:08.645217 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:29:08.645309 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:08.645380 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.645420 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.645463 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem -> /usr/share/ca-certificates/1261148.pem
	I1218 00:29:08.646318 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:29:08.666060 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:29:08.685232 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:29:08.704134 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:29:08.723554 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:29:08.741698 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:29:08.759300 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:29:08.777293 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:29:08.794355 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:29:08.812054 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:29:08.830087 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:29:08.847372 1305484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:29:08.860094 1305484 ssh_runner.go:195] Run: openssl version
	I1218 00:29:08.866090 1305484 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1218 00:29:08.866507 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.874034 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:29:08.881757 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885459 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885707 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885773 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.926478 1305484 command_runner.go:130] > 3ec20f2e
	I1218 00:29:08.926977 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:29:08.934462 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.941654 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:29:08.949245 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953111 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953171 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953238 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.993847 1305484 command_runner.go:130] > b5213941
	I1218 00:29:08.994434 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:29:09.002229 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.011682 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:29:09.020345 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025298 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025353 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025405 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.072271 1305484 command_runner.go:130] > 51391683
	I1218 00:29:09.072867 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
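	(Each of the three rounds above follows the standard /etc/ssl/certs convention: compute the certificate's subject-name hash with `openssl x509 -hash`, then ensure a <hash>.0 symlink points at the PEM. A sketch of one round in Go, shelling out to the same openssl invocation the log shows; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash recreates one round from the log: hash the cert's subject
	// name with openssl, then symlink /etc/ssl/certs/<hash>.0 to the PEM.
	func linkByHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // mirror ln -fs: replace any stale link first
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	)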
	I1218 00:29:09.081208 1305484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085518 1305484 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085547 1305484 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1218 00:29:09.085554 1305484 command_runner.go:130] > Device: 259,1	Inode: 2346127     Links: 1
	I1218 00:29:09.085561 1305484 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:09.085576 1305484 command_runner.go:130] > Access: 2025-12-18 00:25:01.733890088 +0000
	I1218 00:29:09.085582 1305484 command_runner.go:130] > Modify: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085594 1305484 command_runner.go:130] > Change: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085606 1305484 command_runner.go:130] >  Birth: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085761 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:29:09.130673 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.131215 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:29:09.179276 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.179949 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:29:09.226958 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.227517 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:29:09.269182 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.269731 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:29:09.310659 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.311193 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 00:29:09.352162 1305484 command_runner.go:130] > Certificate will not expire
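	(Each `openssl x509 ... -checkend 86400` probe above asks whether the certificate stays valid for at least another 24 hours; exit status 0 prints "Certificate will not expire". The same check in pure Go, as a hedged equivalent rather than what minikube actually runs:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// willOutlive reports whether the certificate at path is still valid at
	// now+window — the question `openssl x509 -checkend 86400` answers.
	func willOutlive(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := willOutlive("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
	)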
	I1218 00:29:09.352228 1305484 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:09.352303 1305484 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:29:09.352361 1305484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:29:09.379004 1305484 cri.go:89] found id: ""
	I1218 00:29:09.379101 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:29:09.386224 1305484 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1218 00:29:09.386247 1305484 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1218 00:29:09.386254 1305484 command_runner.go:130] > /var/lib/minikube/etcd:
	I1218 00:29:09.387165 1305484 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:29:09.387182 1305484 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:29:09.387261 1305484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:29:09.396523 1305484 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:29:09.396996 1305484 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.397115 1305484 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232602" cluster setting kubeconfig missing "functional-232602" context setting]
	I1218 00:29:09.397401 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.397832 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.398029 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.398566 1305484 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1218 00:29:09.398586 1305484 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1218 00:29:09.398591 1305484 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1218 00:29:09.398599 1305484 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1218 00:29:09.398604 1305484 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1218 00:29:09.398644 1305484 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1218 00:29:09.398857 1305484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:29:09.408050 1305484 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1218 00:29:09.408132 1305484 kubeadm.go:602] duration metric: took 20.943322ms to restartPrimaryControlPlane
	I1218 00:29:09.408155 1305484 kubeadm.go:403] duration metric: took 55.931707ms to StartCluster
	I1218 00:29:09.408213 1305484 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.408302 1305484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.409063 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
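	(The repair noted above rewrites the kubeconfig so the "functional-232602" cluster and context entries exist again. A minimal sketch of that rewrite using client-go's clientcmd package; names and paths are copied from the log, but this is an illustration, not minikube's code:

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	// main re-adds the cluster and context entries the log reports missing,
	// then writes the file back under the same path.
	func main() {
		const kc = "/home/jenkins/minikube-integration/22186-1259289/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(kc)
		if err != nil {
			cfg = api.NewConfig() // start fresh if the file is absent or unreadable
		}
		cfg.Clusters["functional-232602"] = &api.Cluster{
			Server:               "https://192.168.49.2:8441",
			CertificateAuthority: "/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt",
		}
		cfg.Contexts["functional-232602"] = &api.Context{
			Cluster:  "functional-232602",
			AuthInfo: "functional-232602",
		}
		cfg.CurrentContext = "functional-232602"
		if err := clientcmd.WriteToFile(*cfg, kc); err != nil {
			panic(err)
		}
	}
	)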
	I1218 00:29:09.409379 1305484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 00:29:09.409544 1305484 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 00:29:09.409943 1305484 addons.go:70] Setting storage-provisioner=true in profile "functional-232602"
	I1218 00:29:09.409964 1305484 addons.go:239] Setting addon storage-provisioner=true in "functional-232602"
	I1218 00:29:09.409988 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.409637 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:09.410125 1305484 addons.go:70] Setting default-storageclass=true in profile "functional-232602"
	I1218 00:29:09.410148 1305484 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232602"
	I1218 00:29:09.410443 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.410469 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.418864 1305484 out.go:179] * Verifying Kubernetes components...
	I1218 00:29:09.421814 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:09.464044 1305484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 00:29:09.465759 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.465914 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.466265 1305484 addons.go:239] Setting addon default-storageclass=true in "functional-232602"
	I1218 00:29:09.466296 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.466740 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.466941 1305484 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.466952 1305484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 00:29:09.466995 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.523535 1305484 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:09.523562 1305484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 00:29:09.523638 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.539603 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.550039 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.631300 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:09.666484 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.687810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:10.394630 1305484 node_ready.go:35] waiting up to 6m0s for node "functional-232602" to be "Ready" ...
	I1218 00:29:10.394645 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.394905 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.394947 1305484 retry.go:31] will retry after 177.31527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.395055 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.395073 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395086 1305484 retry.go:31] will retry after 150.104012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
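Both initial applies fail identically: the node is up, but the apiserver on localhost:8441 is not yet accepting connections, so kubectl cannot fetch /openapi/v2 to validate the manifests. kubectl's own hint is --validate=false; minikube instead keeps validation on and leans on its retry helper (the retry.go:31 lines), re-running the apply after a short, randomized, growing delay. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical retryWithBackoff helper rather than minikube's real implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is a hypothetical helper mirroring the pattern in the
// log: run fn, and on failure sleep for a jittered, doubling delay before
// trying again, up to maxAttempts.
func retryWithBackoff(fn func() error, maxAttempts int, base time.Duration) error {
	delay := base
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter the delay so concurrent retry loops (storage-provisioner
		// and storageclass here) don't synchronize their attempts.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("connection refused")
	}, 5, 150*time.Millisecond)
	fmt.Println("gave up:", err)
}

The jitter is why the logged intervals (177.31527ms, 150.104012ms, 386.236336ms, ...) look irregular rather than cleanly doubling, while still trending upward toward the multi-second waits seen further down.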
	I1218 00:29:10.395151 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.395566 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.545905 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:10.572498 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.615825 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.615864 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.615882 1305484 retry.go:31] will retry after 386.236336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650773 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.650838 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650865 1305484 retry.go:31] will retry after 280.734601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.894991 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.895069 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.932808 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.998277 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.998407 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.998429 1305484 retry.go:31] will retry after 660.849815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.003467 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.066495 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.066548 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.066567 1305484 retry.go:31] will retry after 792.514458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.395083 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.659960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:11.722453 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.722493 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.722511 1305484 retry.go:31] will retry after 472.801155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.859919 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.895517 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.895589 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.895884 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.931975 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.936172 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.936234 1305484 retry.go:31] will retry after 583.966469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.195539 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:12.255280 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.259094 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.259131 1305484 retry.go:31] will retry after 926.212833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.395399 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.395475 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.395812 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:12.395919 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
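Interleaved with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-232602 every ~500ms (waiting up to 6m0s) for the node's Ready condition; while the apiserver is down, each probe ends in the connection-refused warning above. A hedged sketch of that wait loop using client-go follows; the helper name and intervals are assumptions, not minikube's source:

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True,
// tolerating transient errors such as "connection refused" while the
// apiserver restarts — the same shape as minikube's node_ready.go loop.
func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Log and keep polling; returning the error would abort the wait.
				fmt.Println("will retry:", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}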
	I1218 00:29:12.520996 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:12.581638 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.581728 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.581762 1305484 retry.go:31] will retry after 1.65494693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.894979 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.895073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.895402 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.186032 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:13.243730 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:13.248249 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.248281 1305484 retry.go:31] will retry after 1.192911742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.395563 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.395681 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.395976 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.895848 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.895954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.896330 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:14.237854 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:14.298889 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.302600 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.302641 1305484 retry.go:31] will retry after 1.5263786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.395779 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.395871 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.396209 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:14.396293 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:14.441356 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:14.508115 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.508165 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.508184 1305484 retry.go:31] will retry after 3.305911776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.895774 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.895890 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.896219 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.394975 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.395063 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.395415 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.829900 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:15.892510 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:15.892556 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.892574 1305484 retry.go:31] will retry after 3.944012673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.895725 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.895798 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.896127 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.394873 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.394951 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.395246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.894968 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.895399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:16.895481 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:17.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:17.814960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:17.873346 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:17.873415 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.873437 1305484 retry.go:31] will retry after 2.287204088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.895511 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.895579 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.895833 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.395764 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.395845 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.396148 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.895031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.895440 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:19.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.395284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.395328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:19.836815 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:19.891772 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895038 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.895109 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.895347 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.895501 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895520 1305484 retry.go:31] will retry after 2.272181462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.160871 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:20.233754 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:20.233805 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.233824 1305484 retry.go:31] will retry after 9.03130365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.395263 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.395392 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.395710 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:20.894916 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.894992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.895300 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:21.395041 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.395135 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.395466 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:21.395525 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:21.894931 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.895012 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.895341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.168810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:22.226105 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:22.229620 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.229649 1305484 retry.go:31] will retry after 6.326012676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.394929 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.395003 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.395290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.894864 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.894939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.895280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.395383 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.895016 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.895087 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.895360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:23.895414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:24.395042 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.395119 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:24.895109 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.895188 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.395358 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.395437 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.395700 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.895538 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.895612 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.895906 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:25.895954 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:26.395465 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.395571 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.395892 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:26.895653 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.895735 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.395741 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.395852 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.396210 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.895856 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.895939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.896273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:27.896328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:28.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.394974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.395220 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:28.556610 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:28.617128 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:28.617182 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.617202 1305484 retry.go:31] will retry after 6.797257953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.895668 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.895975 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.265354 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:29.327180 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:29.327227 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.327246 1305484 retry.go:31] will retry after 10.081474738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.395407 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.395481 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.395821 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[identical GET polls of this URL repeated every ~500ms through 00:29:35.395, each logging an empty "Request Body" and an empty response]
	W1218 00:29:30.395928 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[same node_ready.go warning repeated at 00:29:32.396 and 00:29:34.895]
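
The surrounding poll loop (type.go / round_trippers.go / node_ready.go) re-reads the node object every ~500ms and treats connection errors as retryable until the node reports a Ready condition. A minimal client-go sketch of an equivalent wait, assuming the kubeconfig path and node name from this log; waitNodeReady is an illustrative name, and the k8s.io/client-go module is assumed to be available:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> every 500ms until the Ready
    // condition is True or the timeout expires; errors (e.g. connection refused
    // while the apiserver is down) are logged and retried, as in the log above.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "functional-232602", 5*time.Minute))
    }
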
	I1218 00:29:35.415065 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:35.470618 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:35.474707 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:35.474739 1305484 retry.go:31] will retry after 12.346765183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-232602 continued every ~500ms from 00:29:35.894 through 00:29:39.395, each returning an empty response]
	W1218 00:29:37.396085 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:39.409781 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:39.473091 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:39.473144 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:39.473164 1305484 retry.go:31] will retry after 18.475103934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
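
Two things are worth separating in the apply failures above. First, the validation error is secondary: kubectl downloads the OpenAPI schema from the API server before applying, so with the apiserver down even --validate=false would only move the same connection-refused failure to the apply call itself. Second, each attempt is a shell-out to the versioned kubectl under sudo with KUBECONFIG pinned to /var/lib/minikube/kubeconfig. A minimal local sketch of that shell-out — applyManifest is an illustrative name, and minikube actually runs the command remotely via its ssh_runner:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // applyManifest runs the same command shape as the ssh_runner lines above,
    // capturing stdout/stderr so a failure can be folded into a retry message.
    func applyManifest(kubectl, kubeconfig, manifest string) error {
        cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
            "apply", "--force", "-f", manifest)
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            return fmt.Errorf("apply failed, will retry: %v\nstdout:\n%s\nstderr:\n%s",
                err, stdout.String(), stderr.String())
        }
        return nil
    }

    func main() {
        fmt.Println(applyManifest(
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/storage-provisioner.yaml"))
    }
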
	[GET polls continued every ~500ms from 00:29:39.895 through 00:29:47.395, each returning an empty response]
	W1218 00:29:39.896239 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[same node_ready.go warning repeated at 00:29:42.395, 00:29:44.395, and 00:29:46.895]
	I1218 00:29:47.821776 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:47.880326 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:47.883900 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:47.883932 1305484 retry.go:31] will retry after 18.240859758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[GET polls continued every ~500ms from 00:29:47.895 through 00:29:57.895, each returning an empty response]
	W1218 00:29:48.895589 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[same node_ready.go warning repeated at 00:29:50.895, 00:29:53.395, and 00:29:55.895]
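
The "Request"/"Response" pairs throughout this log come from client-go's transport-level logging (round_trippers.go), which records the verb, URL, headers, and latency of every API call; status="" with milliseconds=0 means the TCP dial failed before any HTTP exchange happened. A minimal sketch of the same wrapping-RoundTripper pattern — loggingTransport is an illustrative name, not client-go's implementation:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // loggingTransport wraps another RoundTripper and logs each request's verb,
    // URL, status, and latency, mirroring the round_trippers.go lines above.
    type loggingTransport struct{ next http.RoundTripper }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        resp, err := t.next.RoundTrip(req)
        status := ""
        if resp != nil {
            status = resp.Status
        }
        fmt.Printf("%s %s -> status=%q milliseconds=%d err=%v\n",
            req.Method, req.URL, status, time.Since(start).Milliseconds(), err)
        return resp, err
    }

    func main() {
        client := &http.Client{Transport: loggingTransport{http.DefaultTransport}}
        // Against a down apiserver this logs status="" with near-zero latency,
        // exactly like the entries in this report.
        _, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-232602")
    }
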
	I1218 00:29:57.948848 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:58.011608 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:58.015264 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:58.015303 1305484 retry.go:31] will retry after 17.396243449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[GET polls continued every ~500ms from 00:29:58.394 through 00:30:05.895, each returning an empty response]
	W1218 00:29:58.395294 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[same node_ready.go warning repeated at 00:30:00.395, 00:30:02.895, and 00:30:05.395]
	I1218 00:30:06.125881 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:06.190863 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:06.190916 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:06.190936 1305484 retry.go:31] will retry after 24.931144034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[GET polls continued every ~500ms from 00:30:06.395 through 00:30:15.395, each returning an empty response]
	W1218 00:30:07.395997 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[same node_ready.go warning repeated at 00:30:09.895, 00:30:12.395, and 00:30:14.895]
	I1218 00:30:15.411948 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:15.467885 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:15.471996 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:15.472026 1305484 retry.go:31] will retry after 23.671964263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
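
Note that both failure shapes above are the same symptom: kubectl's kubeconfig points at localhost:8441 ([::1]:8441) and minikube's client at 192.168.49.2:8441, and both get "connect: connection refused", meaning nothing is listening on the apiserver port at all. A plain TCP probe confirms that without any Kubernetes machinery — the addresses below are the ones from this log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the apiserver port on both addresses seen in the log.
        for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err) // expect "connection refused" while the apiserver is down
                continue
            }
            conn.Close()
            fmt.Printf("%s: listening\n", addr)
        }
    }
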
	I1218 00:30:15.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:15.895665 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:15.895991 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 poll above repeats every ~500ms through 00:30:30.895 with the same empty response; node_ready.go:55 logs 'error getting node "functional-232602" condition "Ready" status (will retry): ... dial tcp 192.168.49.2:8441: connect: connection refused' roughly every 2-2.5s ...]
	I1218 00:30:31.123262 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:31.181409 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.184938 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.185056 1305484 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
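
Every one of these addon apply failures bottoms out in the same probe: kubectl fetching https://localhost:8441/openapi/v2 for validation and getting connection refused because the apiserver is not up. A small self-contained Go sketch that reproduces just that probe follows; probeOpenAPI is a hypothetical name, and InsecureSkipVerify is only for poking the test cluster's self-signed endpoint.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeOpenAPI issues the same request kubectl's validator makes:
// GET https://localhost:8441/openapi/v2. While the apiserver is down,
// this fails with "connection refused", exactly as logged above.
func probeOpenAPI() error {
	client := &http.Client{
		Timeout: 32 * time.Second,
		Transport: &http.Transport{
			// The test cluster serves a self-signed cert; skip verification
			// for this probe only. Never do this against a real cluster.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
	return nil
}

func main() {
	if err := probeOpenAPI(); err != nil {
		fmt.Println("probe failed:", err)
	}
}
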
	[... the same poll repeats every ~500ms from 00:30:31.395 through 00:30:38.895, every attempt refused, with node_ready.go:55 "will retry" warnings at 00:30:32.396, 00:30:34.895, and 00:30:36.895 ...]
	I1218 00:30:39.144879 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:39.206506 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206561 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206652 1305484 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 00:30:39.209780 1305484 out.go:179] * Enabled addons: 
	I1218 00:30:39.213292 1305484 addons.go:530] duration metric: took 1m29.803748848s for enable addons: enabled=[]
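
With addon enablement abandoned (enabled=[]), everything from here to the end of the capture is a single loop: minikube's node_ready check GETs /api/v1/nodes/functional-232602 about every 500ms and keeps retrying on connection refused. As a rough sketch of what such a readiness poll looks like with client-go; nodeReady is a hypothetical helper, not minikube's actual node_ready.go, and it assumes the kubeconfig path from the log and that k8s.io/client-go is available.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady GETs /api/v1/nodes/<name> (the URL polled throughout this log)
// and reports whether the NodeReady condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ready, err := nodeReady(cs, "functional-232602")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
}
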
	[... the same poll repeats every ~500ms from 00:30:39.394 through 00:31:11.395, every attempt refused, with node_ready.go:55 "will retry" warnings continuing at ~2.5s intervals ...]
	I1218 00:31:11.895773 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:11.895839 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:11.896085 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:12.395833 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:12.395908 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:12.396246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:12.396315 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:12.894849 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:12.894941 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:12.895310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:13.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.395339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:13.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.895004 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.895326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.394884 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.395283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.894810 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.894876 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.895171 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:14.895233 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:15.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.395266 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.395614 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:15.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.895319 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.394906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.394976 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.395230 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:16.895449 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:17.395167 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.395260 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.395607 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:17.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.895160 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.895445 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.394976 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.395051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.395367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.895357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:19.395005 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.395075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.395332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:19.395376 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:19.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.395282 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.395364 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.395694 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.895475 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.895552 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.895809 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:21.395604 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.395678 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.395990 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:21.396041 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:21.895659 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.895733 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.896015 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.395655 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.395728 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.395992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.895435 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.895515 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.895848 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:23.395649 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.395732 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.396074 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:23.396134 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:23.895883 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.895960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.896252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.394837 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.394919 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.395263 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.894847 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.894932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.895271 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.395082 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.395154 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.395412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.895068 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.895145 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.895475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:25.895531 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:26.395075 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.395488 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:26.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.895250 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.395036 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.395377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.895371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:28.395072 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.395417 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:28.395459 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:28.894956 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.895034 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:29.395100 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:29.395178 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:29.395520 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:29.894938 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:29.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:29.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:30.395237 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:30.395365 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:30.395704 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:30.395760 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:30.895519 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:30.895599 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:30.895940 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:31.395676 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:31.395750 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:31.396048 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:31.895809 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:31.895895 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:31.896244 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:32.394845 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:32.394971 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:32.395293 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:32.894900 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:32.894975 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:32.895268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:32.895326 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:33.394994 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:33.395070 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:33.395437 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:33.895135 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:33.895218 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:33.895535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:34.395882 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:34.395954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:34.396208 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:34.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:34.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:34.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:34.895368 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:35.395101 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:35.395178 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:35.395506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:35.895173 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:35.895249 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:35.895577 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:36.394916 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:36.394992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:36.395327 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:36.894927 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:36.895007 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:36.895323 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:37.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:37.394967 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:37.395252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:37.395302 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:37.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:37.895009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:37.895332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:38.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:38.395038 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:38.395371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:38.895059 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:38.895134 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:38.895394 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:39.394962 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:39.395049 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:39.395388 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:39.395443 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:39.895187 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:39.895277 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:39.895635 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:40.395270 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:40.395343 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:40.395589 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:40.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:40.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:40.895352 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:41.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:41.395047 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:41.395386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:41.895073 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:41.895149 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:41.895412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:41.895454 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:42.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:42.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:42.395389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:42.895106 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:42.895183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:42.895531 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:43.394891 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:43.394962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:43.395291 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:43.894960 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:43.895052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:43.895424 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:43.895479 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:44.394928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:44.395009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:44.395368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:44.895047 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:44.895117 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:44.895407 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:45.395328 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:45.395422 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:45.395783 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:45.895608 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:45.895699 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:45.896131 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:45.896187 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:46.394880 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:46.394956 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:46.395280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:46.894977 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:46.895051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:46.895386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:47.395116 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:47.395191 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:47.395557 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:47.894966 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:47.895047 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:47.895364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:48.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:48.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:48.395365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:48.395424 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:48.895132 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:48.895210 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:48.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:49.394910 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:49.395006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:49.395291 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:49.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:49.895027 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:49.895327 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:50.395224 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:50.395303 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:50.395651 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:50.395707 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:50.895406 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:50.895483 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:50.895757 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:51.395554 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:51.395639 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:51.395931 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:51.895695 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:51.895768 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:51.896100 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:52.395729 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:52.395811 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:52.396079 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:52.396127 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:52.895894 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:52.895969 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:52.896306 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:53.395050 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:53.395150 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:53.395532 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:53.894992 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:53.895062 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:53.895316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:54.394937 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:54.395011 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:54.395316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:54.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:54.895025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:54.895320 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:54.895366 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:55.395222 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:55.395291 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:55.395575 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:55.894969 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:55.895061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:55.895409 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:56.394936 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:56.395020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:56.395338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:56.895032 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:56.895105 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:56.895403 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:56.895458 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:57.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:57.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:57.395357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:57.895074 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:57.895154 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:57.895479 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:58.394862 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:58.394940 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:58.395279 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:58.894867 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:58.894948 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:58.895307 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:59.394852 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:59.394934 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:59.395284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:59.395339 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:59.895771 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:59.895849 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:59.896110 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:00.395197 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:00.395298 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:00.395737 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:00.895502 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:00.895586 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:00.895905 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:01.395709 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:01.395787 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:01.396061 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:01.396105 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	... [the GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 poll above repeats unchanged every ~500 ms from 00:32:01 through 00:33:02; every attempt fails the same way, and node_ready.go:55 logs the "will retry" warning roughly every two seconds, the last occurrence of which is shown below]
	W1218 00:33:02.395430 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:02.895092 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:02.895164 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:02.895428 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:03.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:03.395039 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:03.395411 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:03.895004 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:03.895093 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:03.895426 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:04.394889 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:04.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:04.395259 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:04.894914 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:04.894989 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:04.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:04.895395 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:05.395163 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:05.395243 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:05.395682 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:05.895450 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:05.895524 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:05.895784 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:06.395568 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:06.395656 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:06.395978 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:06.895794 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:06.895874 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:06.896211 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:06.896271 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:07.394879 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:07.394959 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:07.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:07.894962 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:07.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:07.895397 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:08.394973 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:08.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:08.395407 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:08.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:08.895172 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:08.895469 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:09.394967 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:09.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:09.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:09.395444 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:09.895137 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:09.895212 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:09.895526 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:10.395178 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:10.395259 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:10.395579 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:10.895391 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:10.895474 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:10.895867 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:11.395660 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:11.395744 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:11.396081 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:11.396140 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
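Every request in this trace carries the same Accept header, "application/vnd.kubernetes.protobuf,application/json": the client prefers the apiserver's protobuf encoding and falls back to JSON. With client-go this negotiation is driven by two fields on rest.Config; a minimal sketch follows, with a hypothetical kubeconfig path.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// These two fields drive content negotiation and produce the Accept
	// header seen in the log: protobuf preferred, JSON as fallback.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	fmt.Println("Accept:", cfg.AcceptContentTypes)
}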
	I1218 00:33:11.895822 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:11.895896 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:11.896157 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:12.394896 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:12.394973 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:12.395293 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:12.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:12.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:12.895368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:13.395034 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:13.395107 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:13.395365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:13.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:13.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:13.895385 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:13.895454 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:14.395141 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:14.395215 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:14.395535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:14.895214 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:14.895295 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:14.895592 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:15.395316 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:15.395398 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:15.395758 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:15.895576 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:15.895652 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:15.895992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:15.896047 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:16.395754 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:16.395828 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:16.396096 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:16.895867 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:16.895943 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:16.896286 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:17.394997 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:17.395084 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:17.395428 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:17.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:17.894962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:17.895235 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:18.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:18.395037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:18.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:18.395414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:18.894958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:18.895040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:18.895375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:19.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:19.394980 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:19.395272 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:19.895004 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:19.895087 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:19.895438 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:20.395201 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:20.395308 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:20.395646 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:20.395698 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:20.895422 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:20.895490 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:20.895757 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:21.395521 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:21.395598 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:21.395947 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:21.895610 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:21.895689 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:21.896027 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:22.395778 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:22.395849 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:22.396108 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:22.396151 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:22.894879 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:22.894954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:22.895254 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:23.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:23.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:23.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:23.894944 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:23.895018 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:23.895354 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:24.395023 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:24.395106 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:24.395432 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:24.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:24.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:24.895375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:24.895433 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:25.395157 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:25.395226 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:25.395475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:25.895136 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:25.895218 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:25.895539 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:26.395250 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:26.395334 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:26.395706 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:26.895464 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:26.895534 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:26.895793 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:26.895834 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:27.395582 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:27.395665 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:27.396005 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:27.895686 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:27.895765 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:27.896121 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:28.395755 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:28.395828 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:28.396080 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:28.895856 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:28.895931 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:28.896264 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:28.896319 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:29.394871 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:29.394967 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:29.395342 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:29.895043 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:29.895118 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:29.895400 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:30.395313 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:30.395390 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:30.395741 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:30.895528 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:30.895610 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:30.895946 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:31.395576 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:31.395644 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:31.395889 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:31.395930 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:31.895675 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:31.895753 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:31.896082 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:32.394834 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:32.394919 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:32.395261 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:32.894964 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:32.895046 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:32.895346 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:33.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:33.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:33.395396 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:33.895091 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:33.895177 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:33.895502 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:33.895563 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:34.394882 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:34.394955 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:34.395261 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:34.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:34.895025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:34.895369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:35.395078 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:35.395153 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:35.395506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:35.894873 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:35.894948 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:35.895257 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:36.394950 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:36.395033 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:36.395348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:36.395402 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
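The recurring failure mode throughout this window is a TCP-level refusal on 192.168.49.2:8441: nothing is listening on the apiserver port, as opposed to a slow or unhealthy apiserver. The same condition can be checked outside the test with a plain dial; a minimal sketch using only the standard library, with the address taken from the log above.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same endpoint the test polls; "connect: connection refused"
	// means the port is closed, i.e. kube-apiserver is not listening (yet).
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}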
	I1218 00:33:36.895071 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:36.895153 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:36.895476 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:37.394881 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:37.394952 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:37.395268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:37.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:37.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:37.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:38.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:38.395002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:38.395360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:38.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:38.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:38.895305 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:38.895353 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:39.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:39.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:39.395364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:39.895212 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:39.895299 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:39.895609 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:40.395293 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:40.395361 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:40.395613 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:40.894947 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:40.895022 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:40.895328 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:40.895383 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:41.395069 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:41.395147 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:41.395453 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:41.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:41.895013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:41.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:42.394951 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:42.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:42.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:42.895138 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:42.895215 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:42.895542 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:42.895601 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:43.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:43.395017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:43.395278 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:43.895604 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:43.895677 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:43.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:44.395290 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:44.395367 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:44.395718 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:44.895507 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:44.895582 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:44.895842 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:44.895892 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:45.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:45.395038 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:45.395360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:45.894958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:45.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:45.895369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:46.395070 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:46.395160 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:46.395494 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:46.894943 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:46.895019 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:46.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:47.394992 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:47.395069 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:47.395419 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:47.395483 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:47.894889 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:47.894965 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:47.895236 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:48.394934 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:48.395017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:48.395366 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:48.895064 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:48.895145 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:48.895481 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:49.395814 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:49.395888 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:49.396152 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:49.396201 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:49.894944 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:49.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:49.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:50.395242 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:50.395323 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:50.395662 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:50.894864 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:50.894942 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:50.895212 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:51.394982 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:51.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:51.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:51.895127 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:51.895213 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:51.895688 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:51.895762 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:52.395524 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:52.395609 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:52.395929 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:52.895771 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:52.895845 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:52.896160 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:53.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:53.395003 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:53.395295 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:53.894861 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:53.894932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:53.895273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:54.394811 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:54.394887 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:54.395224 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:54.395284 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:54.895871 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:54.895944 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:54.896276 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:55.395161 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:55.395236 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:55.395523 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:55.894926 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:55.895000 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:55.895285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:56.394976 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:56.395057 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:56.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:56.395441 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:56.895820 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:56.895899 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:56.896155 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:57.394899 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:57.394982 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:57.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:57.894987 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:57.895075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:57.895413 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:58.395076 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:58.395146 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:58.395477 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:58.395535 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:58.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:58.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:58.895324 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:59.395049 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:59.395125 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:59.395417 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:59.894913 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:59.894984 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:59.895292 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:00.395314 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:00.395415 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:00.395786 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:00.395854 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:00.895591 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:00.895666 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:00.896029 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:01.395664 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:01.395737 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:01.395997 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:01.895814 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:01.895904 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:01.896249 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:02.394968 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:02.395057 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:02.395421 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:02.895119 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:02.895193 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:02.895464 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:02.895507 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:03.395162 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:03.395245 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:03.395584 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:03.895306 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:03.895387 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:03.895714 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:04.395125 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:04.395233 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:04.395547 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:04.895240 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:04.895314 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:04.895659 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:04.895713 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:05.395523 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:05.395602 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:05.395951 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:05.895711 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:05.895784 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:05.896083 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:06.395846 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:06.395920 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:06.396255 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:06.894862 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:06.894944 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:06.895288 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:07.394985 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:07.395056 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:07.395319 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:07.395361 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:07.895013 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:07.895141 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:07.895473 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:08.395190 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:08.395267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:08.395601 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:08.895088 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:08.895159 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:08.895466 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:09.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:09.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:09.395397 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:09.395453 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:09.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:09.895016 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:09.895351 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:10.395167 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:10.395240 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:10.395490 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:10.895174 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:10.895254 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:10.895552 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:11.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:11.395031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:11.395429 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:11.395490 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:11.895021 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:11.895089 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:11.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:12.395645 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:12.395720 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:12.396082 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:12.895753 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:12.895830 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:12.896143 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:13.394854 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:13.394925 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:13.395193 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:13.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:13.895010 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:13.895299 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:13.895347 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:14.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:14.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:14.395375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:14.895035 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:14.895129 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:14.895451 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:15.395317 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:15.395394 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:15.395684 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:15.895487 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:15.895571 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:15.895903 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:15.895957 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:16.395670 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:16.395737 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:16.395998 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:16.895851 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:16.895945 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:16.896285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:17.394992 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:17.395074 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:17.395402 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:17.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:17.894981 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:17.895249 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:18.394916 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:18.394994 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:18.395317 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:18.395371 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:18.894956 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:18.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:18.895376 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:19.394872 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:19.394962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:19.395266 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:19.894954 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:19.895029 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:19.895389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:20.395179 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:20.395258 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:20.395604 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:20.395662 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:20.894898 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:20.894974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:20.895244 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:21.394932 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:21.395016 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:21.395326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:21.894952 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:21.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:21.895353 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:22.394923 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:22.394996 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:22.395310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:22.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:22.895014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:22.895350 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:22.895406 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:23.395099 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:23.395183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:23.395522 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:23.895196 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:23.895267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:23.895572 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:24.394919 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:24.394997 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:24.395328 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:24.894967 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:24.895049 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:24.895386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:24.895443 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:25.395131 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:25.395205 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:25.395456 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:25.894945 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:25.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:25.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:26.394940 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:26.395017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:26.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:26.894973 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:26.895045 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:26.895301 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:27.394928 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:27.395004 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:27.395326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:27.395386 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:27.895785 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:27.895857 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:27.896201 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:28.394885 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:28.394962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:28.395288 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:28.894973 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:28.895060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:28.895403 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:29.395103 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:29.395179 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:29.395527 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:29.395588 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:29.894812 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:29.894881 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:29.895140 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:30.395146 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:30.395230 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:30.395562 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:30.894965 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:30.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:30.895372 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:31.394932 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:31.395008 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:31.395345 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:31.895039 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:31.895125 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:31.895444 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:31.895519 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:32.395185 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:32.395267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:32.395683 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:32.895139 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:32.895210 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:32.895468 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:33.394926 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:33.394999 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:33.395329 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:33.894912 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:33.894997 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:33.895321 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:34.394900 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:34.394970 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:34.395227 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:34.395268 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:34.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:34.895007 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:34.895354 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:35.395150 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:35.395242 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:35.395581 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:35.895262 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:35.895333 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:35.895655 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:36.395446 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:36.395526 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:36.395891 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:36.395954 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:36.895879 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:36.896025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:36.896489 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:37.395256 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:37.395334 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:37.395590 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:37.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:37.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:37.895382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:38.395094 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:38.395175 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:38.395508 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:38.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:38.894997 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:38.895273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:38.895318 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:39.394981 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:39.395056 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:39.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:39.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:39.895025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:39.895350 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:40.395255 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:40.395330 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:40.395611 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:40.895417 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:40.895495 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:40.895856 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:40.895911 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:41.395671 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:41.395749 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:41.396075 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:41.895770 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:41.895842 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:41.896100 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:42.394861 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:42.394945 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:42.395270 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:42.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:42.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:42.895336 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:43.394987 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:43.395061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:43.395349 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:43.395397 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:43.894931 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:43.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:43.895331 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:44.395078 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:44.395167 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:44.395534 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:44.894915 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:44.895001 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:44.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:45.395381 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:45.395465 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:45.395835 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:45.395899 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:45.895622 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:45.895696 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:45.896010 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:46.395697 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:46.395815 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:46.396068 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:46.895828 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:46.895903 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:46.896238 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:47.394829 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:47.394914 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:47.395208 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:47.895315 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET /api/v1/nodes/functional-232602 poll repeats every ~500ms from 00:34:48 through 00:35:09, every attempt refused with "dial tcp 192.168.49.2:8441: connect: connection refused" ...]
	W1218 00:35:09.896082 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:10.395155 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:10.395216 1305484 node_ready.go:38] duration metric: took 6m0.000503053s for node "functional-232602" to be "Ready" ...
	I1218 00:35:10.402744 1305484 out.go:203] 
	W1218 00:35:10.405748 1305484 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 00:35:10.405971 1305484 out.go:285] * 
	W1218 00:35:10.408384 1305484 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:35:10.411337 1305484 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-232602 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.673462148s for "functional-232602" cluster.
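For orientation, the wait that just timed out is a plain poll-until-deadline: node_ready.go re-reads the Node object every ~500ms and checks its Ready condition, and only the 6m0s StartHostTimeout (visible in the cluster config later in this log) ends the loop. A minimal client-go sketch of that pattern, with the interval and timeout taken from this run (the helper is illustrative, not minikube's actual code):

    // A sketch of the readiness poll shown in the stderr above.
    package nodeready

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-fetches the node every 500ms and succeeds once its
    // Ready condition is True, giving up when the 6m timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    // A transient failure (such as "connection refused" while the
                    // apiserver restarts) is logged and retried, not returned.
                    log.Printf("error getting node %q (will retry): %v", name, err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

Returning (false, nil) from the condition on a transient error is what yields the repeated "will retry" warnings above; the connection refusals never abort the wait, only the context deadline does.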
I1218 00:35:10.867156 1261148 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
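Note that every entry under HostConfig.PortBindings requests a dynamic host port (empty HostPort), while the live allocations only appear under NetworkSettings.Ports; the minikube log further down reads them back with exactly this kind of Go template. Assuming the container from this report is still up, either query recovers the apiserver mapping:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232602
    # 33905
    docker port functional-232602 8441/tcp
    # 127.0.0.1:33905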
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (363.736244ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdspecific-port1226370769/001:/mount-9p --alsologtostderr -v=1 --port 46464                     │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh -- ls -la /mount-9p                                                                                                             │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh sudo umount -f /mount-9p                                                                                                        │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount3 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount2 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount1                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount1 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount1                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh findmnt -T /mount2                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh findmnt -T /mount3                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ mount          │ -p functional-739047 --kill=true                                                                                                                      │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format short --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format yaml --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh pgrep buildkitd                                                                                                                 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ image          │ functional-739047 image ls --format json --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format table --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr                                                │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls                                                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ delete         │ -p functional-739047                                                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ start          │ -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ start          │ -p functional-232602 --alsologtostderr -v=8                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:29:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
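	For reference, the first entry below decodes against that format as:
	    I                severity: I=info, W=warning, E=error, F=fatal
	    1218             mmdd: December 18
	    00:29:05.243654  hh:mm:ss.uuuuuu
	    1305484          threadid (the minikube process id, constant across this run)
	    out.go:360       file:line of the emitting source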
	I1218 00:29:05.243654 1305484 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:29:05.243837 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.243867 1305484 out.go:374] Setting ErrFile to fd 2...
	I1218 00:29:05.243888 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.244277 1305484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:29:05.244868 1305484 out.go:368] Setting JSON to false
	I1218 00:29:05.245808 1305484 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25892,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:29:05.245939 1305484 start.go:143] virtualization:  
	I1218 00:29:05.249423 1305484 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:29:05.253059 1305484 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:29:05.253187 1305484 notify.go:221] Checking for updates...
	I1218 00:29:05.259241 1305484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:29:05.262171 1305484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:05.265173 1305484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:29:05.268135 1305484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:29:05.270950 1305484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:29:05.274293 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:05.274440 1305484 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:29:05.308275 1305484 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:29:05.308407 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.375725 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.366230286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.375834 1305484 docker.go:319] overlay module found
	I1218 00:29:05.378939 1305484 out.go:179] * Using the docker driver based on existing profile
	I1218 00:29:05.381619 1305484 start.go:309] selected driver: docker
	I1218 00:29:05.381657 1305484 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.381752 1305484 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:29:05.381892 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.440724 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.431205912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.441147 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:05.441215 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:05.441270 1305484 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.444475 1305484 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:29:05.447488 1305484 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:29:05.450519 1305484 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:29:05.453580 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:05.453631 1305484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:29:05.453641 1305484 cache.go:65] Caching tarball of preloaded images
	I1218 00:29:05.453681 1305484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:29:05.453745 1305484 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:29:05.453756 1305484 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:29:05.453862 1305484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:29:05.474116 1305484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:29:05.474140 1305484 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:29:05.474160 1305484 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:29:05.474205 1305484 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:29:05.474271 1305484 start.go:364] duration metric: took 39.072µs to acquireMachinesLock for "functional-232602"
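The acquireMachinesLock metadata above ({... Delay:500ms Timeout:10m0s}) describes a retry-until-deadline pattern around a named lock. A minimal Go sketch of that pattern, with a hypothetical lock-file helper rather than minikube's actual mutex implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock retries an atomic lock-file creation every delay until the
// deadline passes, mirroring the Delay:500ms/Timeout:10m0s parameters
// logged above. Hypothetical helper, not minikube's real mutex code.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: it fails if the file already exists.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held") // the 39µs acquisition above is this uncontended fast path
}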
	I1218 00:29:05.474294 1305484 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:29:05.474305 1305484 fix.go:54] fixHost starting: 
	I1218 00:29:05.474585 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:05.494473 1305484 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:29:05.494511 1305484 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:29:05.497625 1305484 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:29:05.497657 1305484 machine.go:94] provisionDockerMachine start ...
	I1218 00:29:05.497756 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.514682 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.515020 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.515044 1305484 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:29:05.668376 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.668400 1305484 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:29:05.668465 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.700140 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.700482 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.700495 1305484 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:29:05.865944 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.866034 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.884487 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.884983 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.885010 1305484 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:29:06.041516 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:29:06.041541 1305484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:29:06.041561 1305484 ubuntu.go:190] setting up certificates
	I1218 00:29:06.041572 1305484 provision.go:84] configureAuth start
	I1218 00:29:06.041652 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.060898 1305484 provision.go:143] copyHostCerts
	I1218 00:29:06.060951 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.060994 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:29:06.061002 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.061080 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:29:06.061163 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061182 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:29:06.061187 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061215 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:29:06.061256 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061273 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:29:06.061277 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061301 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:29:06.061349 1305484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
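The san=[...] list on the line above enumerates the subject alternative names baked into the server certificate. A hedged sketch of generating a certificate with that SAN set via Go's crypto/x509; it self-signs for brevity, whereas the step above signs with the cluster's ca.pem/ca-key.pem:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Same SAN set as the san=[...] log line; self-signed here for brevity,
	// whereas the provisioner signs with the cluster CA key pair.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-232602"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-232602", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}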
	I1218 00:29:06.177802 1305484 provision.go:177] copyRemoteCerts
	I1218 00:29:06.177898 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:29:06.177967 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.195440 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.308765 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 00:29:06.308835 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:29:06.326972 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 00:29:06.327095 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:29:06.345137 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 00:29:06.345225 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:29:06.363588 1305484 provision.go:87] duration metric: took 321.991809ms to configureAuth
	I1218 00:29:06.363617 1305484 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:29:06.363812 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:06.363826 1305484 machine.go:97] duration metric: took 866.163062ms to provisionDockerMachine
	I1218 00:29:06.363833 1305484 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:29:06.363845 1305484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:29:06.363904 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:29:06.363949 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.381445 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.493044 1305484 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:29:06.496574 1305484 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1218 00:29:06.496595 1305484 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1218 00:29:06.496599 1305484 command_runner.go:130] > VERSION_ID="12"
	I1218 00:29:06.496604 1305484 command_runner.go:130] > VERSION="12 (bookworm)"
	I1218 00:29:06.496612 1305484 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1218 00:29:06.496615 1305484 command_runner.go:130] > ID=debian
	I1218 00:29:06.496641 1305484 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1218 00:29:06.496649 1305484 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1218 00:29:06.496655 1305484 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1218 00:29:06.496744 1305484 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:29:06.496762 1305484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:29:06.496773 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:29:06.496837 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:29:06.496920 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:29:06.496932 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /etc/ssl/certs/12611482.pem
	I1218 00:29:06.497013 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:29:06.497022 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> /etc/test/nested/copy/1261148/hosts
	I1218 00:29:06.497083 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:29:06.504772 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:06.523736 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:29:06.542759 1305484 start.go:296] duration metric: took 178.908993ms for postStartSetup
	I1218 00:29:06.542856 1305484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:29:06.542901 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.560753 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.665778 1305484 command_runner.go:130] > 18%
	I1218 00:29:06.665854 1305484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:29:06.671095 1305484 command_runner.go:130] > 160G
	I1218 00:29:06.671651 1305484 fix.go:56] duration metric: took 1.19734099s for fixHost
	I1218 00:29:06.671671 1305484 start.go:83] releasing machines lock for "functional-232602", held for 1.197387766s
	I1218 00:29:06.671738 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.688941 1305484 ssh_runner.go:195] Run: cat /version.json
	I1218 00:29:06.689003 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.689377 1305484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:29:06.689435 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.710307 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.721003 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.812429 1305484 command_runner.go:130] > {"iso_version": "v1.37.0-1765846775-22141", "kicbase_version": "v0.0.48-1765966054-22186", "minikube_version": "v1.37.0", "commit": "c344550999bcbb78f38b2df057224788bb2d30b2"}
	I1218 00:29:06.812585 1305484 ssh_runner.go:195] Run: systemctl --version
	I1218 00:29:06.910410 1305484 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 00:29:06.913301 1305484 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1218 00:29:06.913347 1305484 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1218 00:29:06.913421 1305484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 00:29:06.917811 1305484 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 00:29:06.917849 1305484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:29:06.917931 1305484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:29:06.925837 1305484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:29:06.925861 1305484 start.go:496] detecting cgroup driver to use...
	I1218 00:29:06.925891 1305484 detect.go:187] detected "cgroupfs" cgroup driver on host os
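Detecting the cgroup driver typically comes down to whether /sys/fs/cgroup is mounted as the cgroup v2 unified hierarchy. A Linux-only sketch of that check; the mapping of hierarchy version to driver is an assumed simplification, not the exact heuristic used here:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// detectCgroupDriver guesses the driver from the filesystem type mounted
// at /sys/fs/cgroup: a cgroup2 superblock means the unified (v2) hierarchy.
// Sketch only; the version-to-driver mapping is a simplification.
func detectCgroupDriver() (string, error) {
	var fs unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
		return "", err
	}
	if fs.Type == unix.CGROUP2_SUPER_MAGIC {
		return "systemd", nil // typical pairing on a v2 host
	}
	return "cgroupfs", nil // v1 hierarchy, as detected in the log above
}

func main() {
	driver, err := detectCgroupDriver()
	if err != nil {
		panic(err)
	}
	fmt.Println(driver)
}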
	I1218 00:29:06.925936 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:29:06.941416 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:29:06.954870 1305484 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:29:06.954953 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:29:06.971407 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:29:06.985680 1305484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:29:07.097075 1305484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:29:07.240817 1305484 docker.go:234] disabling docker service ...
	I1218 00:29:07.240965 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:29:07.256804 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:29:07.271026 1305484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:29:07.407005 1305484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:29:07.534286 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:29:07.548592 1305484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:29:07.562819 1305484 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 00:29:07.564071 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:29:07.574541 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:29:07.583515 1305484 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:29:07.583615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:29:07.592330 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.601414 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:29:07.610399 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.619445 1305484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:29:07.627615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:29:07.637099 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:29:07.646771 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
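Each sed invocation above is a line-oriented rewrite of /etc/containerd/config.toml. A rough Go equivalent of the SystemdCgroup edit, shown only to make the regex semantics explicit (the test itself shells out to sed exactly as logged, and the sample TOML fragment is illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// (?m) makes ^ and $ match per line, like sed's line-at-a-time model;
	// ${1} re-emits the captured indentation.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	conf := "  [plugins]\n    SystemdCgroup = true\n"
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}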
	I1218 00:29:07.656000 1305484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:29:07.663026 1305484 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 00:29:07.664029 1305484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:29:07.671707 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:07.789368 1305484 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 00:29:07.948156 1305484 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:29:07.948230 1305484 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:29:07.952108 1305484 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1218 00:29:07.952130 1305484 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 00:29:07.952136 1305484 command_runner.go:130] > Device: 0,72	Inode: 1611        Links: 1
	I1218 00:29:07.952144 1305484 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:07.952150 1305484 command_runner.go:130] > Access: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952154 1305484 command_runner.go:130] > Modify: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952160 1305484 command_runner.go:130] > Change: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952164 1305484 command_runner.go:130] >  Birth: -
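The "Will wait 60s for socket path" step boils down to polling until a unix socket appears, which the stat output above confirms. A minimal sketch with a hypothetical waitForSocket helper:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, the shape of
// the "Will wait 60s for socket path" step above. Hypothetical helper.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // matches the srw-rw---- socket mode in the stat output
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is up")
}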
	I1218 00:29:07.952461 1305484 start.go:564] Will wait 60s for crictl version
	I1218 00:29:07.952520 1305484 ssh_runner.go:195] Run: which crictl
	I1218 00:29:07.958389 1305484 command_runner.go:130] > /usr/local/bin/crictl
	I1218 00:29:07.959041 1305484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:29:07.980682 1305484 command_runner.go:130] > Version:  0.1.0
	I1218 00:29:07.980702 1305484 command_runner.go:130] > RuntimeName:  containerd
	I1218 00:29:07.980709 1305484 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1218 00:29:07.980714 1305484 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 00:29:07.982988 1305484 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:29:07.983059 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.002890 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.002977 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.027238 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.034949 1305484 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:29:08.037919 1305484 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:29:08.055210 1305484 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:29:08.059294 1305484 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1218 00:29:08.059421 1305484 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:29:08.059535 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:08.059617 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.084496 1305484 command_runner.go:130] > {
	I1218 00:29:08.084519 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.084525 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084534 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.084540 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084546 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.084550 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084554 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084566 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.084574 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084578 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.084582 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084589 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084593 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084596 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084609 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.084616 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084642 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.084646 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084651 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084659 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.084666 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084671 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.084678 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084682 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084686 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084689 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084696 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.084705 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084716 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.084722 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084731 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084739 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.084751 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084756 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.084760 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.084764 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084768 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084777 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084786 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.084791 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084802 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.084805 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084810 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084818 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.084824 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084829 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.084835 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084839 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084851 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084855 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084860 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084863 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084868 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084876 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.084883 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084888 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.084892 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084896 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084905 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.084917 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084922 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.084929 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084943 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084946 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084957 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084961 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084965 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084968 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084975 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.084983 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084991 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.084998 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085003 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085019 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.085026 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085033 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.085037 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085041 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085044 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085050 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085054 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085057 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085060 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085067 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.085073 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085078 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.085084 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085088 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085106 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.085110 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085114 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.085124 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085128 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085132 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085138 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085148 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.085153 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085160 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.085166 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085170 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085182 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.085191 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085195 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.085199 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085203 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085206 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085224 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085228 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085231 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085235 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085244 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.085252 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085258 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.085264 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085270 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085278 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.085287 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085291 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.085296 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085300 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.085306 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085313 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085317 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.085320 1305484 command_runner.go:130] >     }
	I1218 00:29:08.085323 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.085325 1305484 command_runner.go:130] > }
	I1218 00:29:08.087939 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.087964 1305484 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:29:08.088036 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.111236 1305484 command_runner.go:130] > {
	I1218 00:29:08.111264 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.111269 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111279 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.111286 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111295 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.111298 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111302 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111311 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.111318 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111322 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.111330 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111334 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111337 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111340 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111347 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.111352 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111358 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.111364 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111368 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111379 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.111391 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111396 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.111400 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111404 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111407 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111410 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111417 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.111421 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111426 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.111429 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111437 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111447 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.111454 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111462 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.111467 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.111475 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111478 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111483 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111491 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.111499 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111504 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.111507 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111511 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111519 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.111522 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111527 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.111533 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111537 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111543 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111547 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111559 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111562 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111565 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111573 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.111580 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111585 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.111588 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111592 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111600 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.111606 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111611 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.111617 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111626 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111632 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111635 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111639 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111646 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111652 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111659 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.111662 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111668 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.111671 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111676 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111690 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.111697 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111701 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.111707 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111711 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111716 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111720 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111739 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111742 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111746 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111755 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.111759 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111768 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.111771 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111775 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111785 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.111798 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111802 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.111805 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111809 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111813 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111816 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111825 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.111835 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111840 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.111843 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111855 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111866 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.111872 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111876 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.111880 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111884 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111889 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111893 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111899 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111903 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111913 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111921 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.111925 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111929 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.111933 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111937 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111947 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.111959 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111963 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.111967 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111971 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.111978 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111982 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111989 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.111992 1305484 command_runner.go:130] >     }
	I1218 00:29:08.112001 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.112004 1305484 command_runner.go:130] > }
	I1218 00:29:08.114369 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.114392 1305484 cache_images.go:86] Images are preloaded, skipping loading
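The preload check parses "sudo crictl images --output json" (the dumps above show the full schema) and verifies every required tag is present before deciding to skip loading. A compact sketch of that comparison, using only a subset of the JSON fields and a hypothetical required-image list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models only the fields of "crictl images --output json" that
// this sketch needs; the dumps above show the full schema.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Hypothetical subset of the required tags for v1.35.0-rc.1 on containerd.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
		"registry.k8s.io/pause:3.10.1",
	} {
		if !have[want] {
			fmt.Println("missing:", want)
		}
	}
}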
	I1218 00:29:08.114401 1305484 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:29:08.114566 1305484 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 00:29:08.114639 1305484 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:29:08.137373 1305484 command_runner.go:130] > {
	I1218 00:29:08.137395 1305484 command_runner.go:130] >   "cniconfig": {
	I1218 00:29:08.137400 1305484 command_runner.go:130] >     "Networks": [
	I1218 00:29:08.137405 1305484 command_runner.go:130] >       {
	I1218 00:29:08.137411 1305484 command_runner.go:130] >         "Config": {
	I1218 00:29:08.137420 1305484 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1218 00:29:08.137425 1305484 command_runner.go:130] >           "Name": "cni-loopback",
	I1218 00:29:08.137430 1305484 command_runner.go:130] >           "Plugins": [
	I1218 00:29:08.137433 1305484 command_runner.go:130] >             {
	I1218 00:29:08.137438 1305484 command_runner.go:130] >               "Network": {
	I1218 00:29:08.137442 1305484 command_runner.go:130] >                 "ipam": {},
	I1218 00:29:08.137452 1305484 command_runner.go:130] >                 "type": "loopback"
	I1218 00:29:08.137456 1305484 command_runner.go:130] >               },
	I1218 00:29:08.137463 1305484 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1218 00:29:08.137467 1305484 command_runner.go:130] >             }
	I1218 00:29:08.137470 1305484 command_runner.go:130] >           ],
	I1218 00:29:08.137483 1305484 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1218 00:29:08.137489 1305484 command_runner.go:130] >         },
	I1218 00:29:08.137494 1305484 command_runner.go:130] >         "IFName": "lo"
	I1218 00:29:08.137498 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137503 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137508 1305484 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1218 00:29:08.137515 1305484 command_runner.go:130] >     "PluginDirs": [
	I1218 00:29:08.137519 1305484 command_runner.go:130] >       "/opt/cni/bin"
	I1218 00:29:08.137522 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137526 1305484 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1218 00:29:08.137529 1305484 command_runner.go:130] >     "Prefix": "eth"
	I1218 00:29:08.137533 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137536 1305484 command_runner.go:130] >   "config": {
	I1218 00:29:08.137540 1305484 command_runner.go:130] >     "cdiSpecDirs": [
	I1218 00:29:08.137544 1305484 command_runner.go:130] >       "/etc/cdi",
	I1218 00:29:08.137554 1305484 command_runner.go:130] >       "/var/run/cdi"
	I1218 00:29:08.137569 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137573 1305484 command_runner.go:130] >     "cni": {
	I1218 00:29:08.137576 1305484 command_runner.go:130] >       "binDir": "",
	I1218 00:29:08.137580 1305484 command_runner.go:130] >       "binDirs": [
	I1218 00:29:08.137584 1305484 command_runner.go:130] >         "/opt/cni/bin"
	I1218 00:29:08.137587 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.137591 1305484 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1218 00:29:08.137595 1305484 command_runner.go:130] >       "confTemplate": "",
	I1218 00:29:08.137598 1305484 command_runner.go:130] >       "ipPref": "",
	I1218 00:29:08.137602 1305484 command_runner.go:130] >       "maxConfNum": 1,
	I1218 00:29:08.137606 1305484 command_runner.go:130] >       "setupSerially": false,
	I1218 00:29:08.137610 1305484 command_runner.go:130] >       "useInternalLoopback": false
	I1218 00:29:08.137613 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137620 1305484 command_runner.go:130] >     "containerd": {
	I1218 00:29:08.137627 1305484 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1218 00:29:08.137632 1305484 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1218 00:29:08.137639 1305484 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1218 00:29:08.137645 1305484 command_runner.go:130] >       "runtimes": {
	I1218 00:29:08.137648 1305484 command_runner.go:130] >         "runc": {
	I1218 00:29:08.137654 1305484 command_runner.go:130] >           "ContainerAnnotations": null,
	I1218 00:29:08.137665 1305484 command_runner.go:130] >           "PodAnnotations": null,
	I1218 00:29:08.137670 1305484 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1218 00:29:08.137674 1305484 command_runner.go:130] >           "cgroupWritable": false,
	I1218 00:29:08.137679 1305484 command_runner.go:130] >           "cniConfDir": "",
	I1218 00:29:08.137685 1305484 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1218 00:29:08.137689 1305484 command_runner.go:130] >           "io_type": "",
	I1218 00:29:08.137695 1305484 command_runner.go:130] >           "options": {
	I1218 00:29:08.137699 1305484 command_runner.go:130] >             "BinaryName": "",
	I1218 00:29:08.137703 1305484 command_runner.go:130] >             "CriuImagePath": "",
	I1218 00:29:08.137707 1305484 command_runner.go:130] >             "CriuWorkPath": "",
	I1218 00:29:08.137710 1305484 command_runner.go:130] >             "IoGid": 0,
	I1218 00:29:08.137715 1305484 command_runner.go:130] >             "IoUid": 0,
	I1218 00:29:08.137726 1305484 command_runner.go:130] >             "NoNewKeyring": false,
	I1218 00:29:08.137734 1305484 command_runner.go:130] >             "Root": "",
	I1218 00:29:08.137738 1305484 command_runner.go:130] >             "ShimCgroup": "",
	I1218 00:29:08.137742 1305484 command_runner.go:130] >             "SystemdCgroup": false
	I1218 00:29:08.137746 1305484 command_runner.go:130] >           },
	I1218 00:29:08.137752 1305484 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1218 00:29:08.137761 1305484 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1218 00:29:08.137764 1305484 command_runner.go:130] >           "runtimePath": "",
	I1218 00:29:08.137770 1305484 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1218 00:29:08.137780 1305484 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1218 00:29:08.137784 1305484 command_runner.go:130] >           "snapshotter": ""
	I1218 00:29:08.137787 1305484 command_runner.go:130] >         }
	I1218 00:29:08.137790 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137794 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137804 1305484 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1218 00:29:08.137817 1305484 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1218 00:29:08.137822 1305484 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1218 00:29:08.137828 1305484 command_runner.go:130] >     "disableApparmor": false,
	I1218 00:29:08.137835 1305484 command_runner.go:130] >     "disableHugetlbController": true,
	I1218 00:29:08.137840 1305484 command_runner.go:130] >     "disableProcMount": false,
	I1218 00:29:08.137844 1305484 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1218 00:29:08.137853 1305484 command_runner.go:130] >     "enableCDI": true,
	I1218 00:29:08.137857 1305484 command_runner.go:130] >     "enableSelinux": false,
	I1218 00:29:08.137862 1305484 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1218 00:29:08.137866 1305484 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1218 00:29:08.137871 1305484 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1218 00:29:08.137878 1305484 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1218 00:29:08.137882 1305484 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1218 00:29:08.137887 1305484 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1218 00:29:08.137894 1305484 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1218 00:29:08.137901 1305484 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137906 1305484 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1218 00:29:08.137921 1305484 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137929 1305484 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1218 00:29:08.137940 1305484 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1218 00:29:08.137943 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137947 1305484 command_runner.go:130] >   "features": {
	I1218 00:29:08.137952 1305484 command_runner.go:130] >     "supplemental_groups_policy": true
	I1218 00:29:08.137955 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137962 1305484 command_runner.go:130] >   "golang": "go1.24.9",
	I1218 00:29:08.137972 1305484 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137984 1305484 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137998 1305484 command_runner.go:130] >   "runtimeHandlers": [
	I1218 00:29:08.138001 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138005 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138009 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138019 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138022 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138025 1305484 command_runner.go:130] >     },
	I1218 00:29:08.138028 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138043 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138048 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138053 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138056 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138060 1305484 command_runner.go:130] >       "name": "runc"
	I1218 00:29:08.138065 1305484 command_runner.go:130] >     }
	I1218 00:29:08.138069 1305484 command_runner.go:130] >   ],
	I1218 00:29:08.138074 1305484 command_runner.go:130] >   "status": {
	I1218 00:29:08.138078 1305484 command_runner.go:130] >     "conditions": [
	I1218 00:29:08.138089 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138093 1305484 command_runner.go:130] >         "message": "",
	I1218 00:29:08.138097 1305484 command_runner.go:130] >         "reason": "",
	I1218 00:29:08.138101 1305484 command_runner.go:130] >         "status": true,
	I1218 00:29:08.138112 1305484 command_runner.go:130] >         "type": "RuntimeReady"
	I1218 00:29:08.138115 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138118 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138128 1305484 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1218 00:29:08.138137 1305484 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1218 00:29:08.138140 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138147 1305484 command_runner.go:130] >         "type": "NetworkReady"
	I1218 00:29:08.138150 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138155 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138178 1305484 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1218 00:29:08.138187 1305484 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1218 00:29:08.138192 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138197 1305484 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1218 00:29:08.138203 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138206 1305484 command_runner.go:130] >     ]
	I1218 00:29:08.138209 1305484 command_runner.go:130] >   }
	I1218 00:29:08.138212 1305484 command_runner.go:130] > }
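	Note: the JSON dump above is the CRI status minikube reads back from containerd. The NetworkReady=false / "cni plugin not initialized" condition is expected at this point, since nothing has been written to /etc/cni/net.d yet, and ContainerdHasNoDeprecationWarnings=false only reflects the host still running cgroup v1. A minimal Go sketch (not minikube's own code; it assumes crictl is on PATH inside the node) that surfaces the same conditions:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// criInfo models only the "status.conditions" slice of `crictl info`,
// using the field names visible in the dump above.
type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("crictl", "info").Output()
	if err != nil {
		log.Fatalf("crictl info: %v", err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decoding crictl info: %v", err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%-40s status=%-5v reason=%s\n", c.Type, c.Status, c.Reason)
	}
}

	Against the dump above this prints RuntimeReady status=true and NetworkReady status=false reason=NetworkPluginNotReady, which is why the next lines create a CNI manager and recommend kindnet.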
	I1218 00:29:08.140863 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:08.140888 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:08.140910 1305484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:29:08.140937 1305484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:29:08.141052 1305484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
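	Note: the rendered kubeadm payload above pins the kubelet to the cgroupfs driver and the containerd socket, and (per the inline comment) disables disk-pressure eviction for CI by zeroing the evictionHard thresholds. A small Go sketch (assuming gopkg.in/yaml.v3 is available) that decodes the KubeletConfiguration fragment and echoes the pinned fields:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Only the fields the log above pins are modeled; everything else is ignored.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	HairpinMode              string `yaml:"hairpinMode"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

const fragment = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	var c kubeletConfig
	if err := yaml.Unmarshal([]byte(fragment), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", c)
}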
	
	I1218 00:29:08.141124 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:29:08.148733 1305484 command_runner.go:130] > kubeadm
	I1218 00:29:08.148755 1305484 command_runner.go:130] > kubectl
	I1218 00:29:08.148759 1305484 command_runner.go:130] > kubelet
	I1218 00:29:08.149813 1305484 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:29:08.149929 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:29:08.157899 1305484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:29:08.171631 1305484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:29:08.185534 1305484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1218 00:29:08.199213 1305484 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:29:08.203261 1305484 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1218 00:29:08.203343 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:08.317482 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:08.643734 1305484 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:29:08.643804 1305484 certs.go:195] generating shared ca certs ...
	I1218 00:29:08.643833 1305484 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:08.644029 1305484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:29:08.644119 1305484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:29:08.644145 1305484 certs.go:257] generating profile certs ...
	I1218 00:29:08.644307 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:29:08.644441 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:29:08.644531 1305484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:29:08.644560 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 00:29:08.644603 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 00:29:08.644662 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 00:29:08.644693 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 00:29:08.644737 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 00:29:08.644768 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 00:29:08.644809 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 00:29:08.644841 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 00:29:08.644932 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:29:08.645003 1305484 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:29:08.645041 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:29:08.645094 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:29:08.645151 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:29:08.645217 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:29:08.645309 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:08.645380 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.645420 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.645463 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem -> /usr/share/ca-certificates/1261148.pem
	I1218 00:29:08.646318 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:29:08.666060 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:29:08.685232 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:29:08.704134 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:29:08.723554 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:29:08.741698 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:29:08.759300 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:29:08.777293 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:29:08.794355 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:29:08.812054 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:29:08.830087 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:29:08.847372 1305484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:29:08.860094 1305484 ssh_runner.go:195] Run: openssl version
	I1218 00:29:08.866090 1305484 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1218 00:29:08.866507 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.874034 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:29:08.881757 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885459 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885707 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885773 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.926478 1305484 command_runner.go:130] > 3ec20f2e
	I1218 00:29:08.926977 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:29:08.934462 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.941654 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:29:08.949245 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953111 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953171 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953238 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.993847 1305484 command_runner.go:130] > b5213941
	I1218 00:29:08.994434 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:29:09.002229 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.011682 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:29:09.020345 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025298 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025353 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025405 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.072271 1305484 command_runner.go:130] > 51391683
	I1218 00:29:09.072867 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
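	Note: the repeated test -s / ln -fs / `openssl x509 -hash -noout` / test -L sequence above installs each CA under /etc/ssl/certs and verifies that a symlink named after its OpenSSL subject hash exists (e.g. b5213941.0 for minikubeCA.pem), which is how TLS clients on the node locate trusted roots. A Go sketch of the overall effect (illustrative, not minikube's exact code; the log only shows the <hash>.0 link being verified, and the sketch assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// installCA asks openssl for the certificate's subject hash, then links
// /etc/ssl/certs/<hash>.0 to the PEM file, mirroring `ln -fs` semantics.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // force-replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}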
	I1218 00:29:09.081208 1305484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085518 1305484 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085547 1305484 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1218 00:29:09.085554 1305484 command_runner.go:130] > Device: 259,1	Inode: 2346127     Links: 1
	I1218 00:29:09.085561 1305484 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:09.085576 1305484 command_runner.go:130] > Access: 2025-12-18 00:25:01.733890088 +0000
	I1218 00:29:09.085582 1305484 command_runner.go:130] > Modify: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085594 1305484 command_runner.go:130] > Change: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085606 1305484 command_runner.go:130] >  Birth: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085761 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:29:09.130673 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.131215 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:29:09.179276 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.179949 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:29:09.226958 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.227517 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:29:09.269182 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.269731 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:29:09.310659 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.311193 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 00:29:09.352162 1305484 command_runner.go:130] > Certificate will not expire
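	Note: each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it does not. The same check can be done natively with crypto/x509; a minimal sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d,
// matching the semantics of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon) // the log above reports "Certificate will not expire"
}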
	I1218 00:29:09.352228 1305484 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:09.352303 1305484 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:29:09.352361 1305484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:29:09.379004 1305484 cri.go:89] found id: ""
	I1218 00:29:09.379101 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:29:09.386224 1305484 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1218 00:29:09.386247 1305484 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1218 00:29:09.386254 1305484 command_runner.go:130] > /var/lib/minikube/etcd:
	I1218 00:29:09.387165 1305484 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:29:09.387182 1305484 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:29:09.387261 1305484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:29:09.396523 1305484 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:29:09.396996 1305484 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.397115 1305484 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232602" cluster setting kubeconfig missing "functional-232602" context setting]
	I1218 00:29:09.397401 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.397832 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.398029 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.398566 1305484 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1218 00:29:09.398586 1305484 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1218 00:29:09.398591 1305484 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1218 00:29:09.398599 1305484 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1218 00:29:09.398604 1305484 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1218 00:29:09.398644 1305484 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1218 00:29:09.398857 1305484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:29:09.408050 1305484 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1218 00:29:09.408132 1305484 kubeadm.go:602] duration metric: took 20.943322ms to restartPrimaryControlPlane
	I1218 00:29:09.408155 1305484 kubeadm.go:403] duration metric: took 55.931707ms to StartCluster
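	Note: restartPrimaryControlPlane decides between reusing the running cluster and reconfiguring it by diffing the live /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new (the `sudo diff -u` run above); exit status 0 is read as "no reconfiguration required". A sketch of that decision (assumed shape, not minikube's exact code):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// needsReconfig mirrors the diff-based check above: diff exits 0 when the
// files are identical, 1 when they differ, and >1 on real trouble.
func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil // identical: keep the running control plane as-is
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ: the control plane must be reconfigured
	}
	return false, err
}

func main() {
	diff, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("needs reconfiguration:", diff)
}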
	I1218 00:29:09.408213 1305484 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.408302 1305484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.409063 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.409379 1305484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 00:29:09.409544 1305484 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 00:29:09.409943 1305484 addons.go:70] Setting storage-provisioner=true in profile "functional-232602"
	I1218 00:29:09.409964 1305484 addons.go:239] Setting addon storage-provisioner=true in "functional-232602"
	I1218 00:29:09.409988 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.409637 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:09.410125 1305484 addons.go:70] Setting default-storageclass=true in profile "functional-232602"
	I1218 00:29:09.410148 1305484 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232602"
	I1218 00:29:09.410443 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.410469 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.418864 1305484 out.go:179] * Verifying Kubernetes components...
	I1218 00:29:09.421814 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:09.464044 1305484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 00:29:09.465759 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.465914 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.466265 1305484 addons.go:239] Setting addon default-storageclass=true in "functional-232602"
	I1218 00:29:09.466296 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.466740 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.466941 1305484 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.466952 1305484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 00:29:09.466995 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.523535 1305484 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:09.523562 1305484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 00:29:09.523638 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.539603 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.550039 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.631300 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:09.666484 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.687810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:10.394630 1305484 node_ready.go:35] waiting up to 6m0s for node "functional-232602" to be "Ready" ...
	I1218 00:29:10.394645 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.394905 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.394947 1305484 retry.go:31] will retry after 177.31527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.395055 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.395073 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395086 1305484 retry.go:31] will retry after 150.104012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
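	Note: the apply failures above are expected while the apiserver is still coming back up: kubectl's validation step cannot reach https://localhost:8441, so retry.go re-runs each apply after a growing, jittered delay (177ms, 150ms, 386ms, ... up to multi-second waits below). A generic Go sketch of that pattern (the exact backoff policy and jitter factor are assumptions, not minikube's implementation):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping an exponentially growing,
// jittered delay between failures, in the spirit of the log lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(5, 150*time.Millisecond, func() error {
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}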
	I1218 00:29:10.395151 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.395566 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.545905 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:10.572498 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.615825 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.615864 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.615882 1305484 retry.go:31] will retry after 386.236336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650773 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.650838 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650865 1305484 retry.go:31] will retry after 280.734601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.894991 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.895069 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.932808 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.998277 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.998407 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.998429 1305484 retry.go:31] will retry after 660.849815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.003467 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.066495 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.066548 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.066567 1305484 retry.go:31] will retry after 792.514458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.395083 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.659960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:11.722453 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.722493 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.722511 1305484 retry.go:31] will retry after 472.801155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.859919 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.895517 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.895589 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.895884 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.931975 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.936172 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.936234 1305484 retry.go:31] will retry after 583.966469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.195539 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:12.255280 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.259094 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.259131 1305484 retry.go:31] will retry after 926.212833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.395399 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.395475 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.395812 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:12.395919 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
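	Note: the GET /api/v1/nodes/functional-232602 requests interleaved above belong to the readiness wait started at node_ready.go:35; each connection-refused response is logged and retried until the apiserver answers and the node reports Ready. An illustrative client-go loop with the same shape (kubeconfig path taken from the log; this is not minikube's own code):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22186-1259289/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-232602", metav1.GetOptions{})
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver restarts
			log.Printf("error getting node (will retry): %v", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}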
	I1218 00:29:12.520996 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:12.581638 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.581728 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.581762 1305484 retry.go:31] will retry after 1.65494693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.894979 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.895073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.895402 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.186032 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:13.243730 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:13.248249 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.248281 1305484 retry.go:31] will retry after 1.192911742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.395563 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.395681 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.395976 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.895848 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.895954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.896330 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:14.237854 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:14.298889 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.302600 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.302641 1305484 retry.go:31] will retry after 1.5263786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.395779 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.395871 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.396209 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:14.396293 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:14.441356 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:14.508115 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.508165 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.508184 1305484 retry.go:31] will retry after 3.305911776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
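
[editor's note] Each "ssh_runner.go:195] Run: sudo KUBECONFIG=... kubectl apply --force -f ..." line is a shelled-out invocation of the pinned kubectl binary inside the node, with stdout and stderr captured separately and replayed verbatim in the "Process exited with status 1 / stdout: / stderr:" blocks above. A hypothetical sketch of that step using os/exec and the paths visible in this log (applyManifest is an illustrative helper, not minikube's code):

// Hypothetical sketch of the apply step driven in the lines above: shell
// out to kubectl with KUBECONFIG set (sudo accepts VAR=value assignments
// before the command), capturing stdout/stderr so a failure can be
// reported exactly like the blocks in this log.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
		"apply", "--force", "-f", manifest)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("%s apply %s: %w\nstdout:\n%s\nstderr:\n%s",
			kubectl, manifest, err, stdout.String(), stderr.String())
	}
	return nil
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
	)
	fmt.Println(err)
}
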
	I1218 00:29:14.895774 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.895890 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.896219 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.394975 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.395063 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.395415 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.829900 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:15.892510 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:15.892556 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.892574 1305484 retry.go:31] will retry after 3.944012673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.895725 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.895798 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.896127 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.394873 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.394951 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.395246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.894968 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.895399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:16.895481 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:17.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:17.814960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:17.873346 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:17.873415 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.873437 1305484 retry.go:31] will retry after 2.287204088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.895511 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.895579 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.895833 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.395764 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.395845 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.396148 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.895031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.895440 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:19.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.395284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.395328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
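
[editor's note] Interleaved with the addon applies, node_ready.go polls GET /api/v1/nodes/functional-232602 roughly every 500ms (see the timestamps on the round_trippers lines) and emits the warning above whenever the apiserver refuses the connection, retrying until the node reports Ready or the wait times out. A sketch of such a poll loop using standard client-go calls; waitNodeReady is an illustrative helper under assumed names, not minikube's code:

// Hypothetical sketch of a node-Ready poll loop: GET the node every
// 500ms, treat transport errors (e.g. "connect: connection refused"
// while the apiserver restarts) as retryable, and succeed once the
// Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// matches the warning lines in this log
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-232602", 4*time.Minute))
}
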
	I1218 00:29:19.836815 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:19.891772 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895038 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.895109 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.895347 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.895501 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895520 1305484 retry.go:31] will retry after 2.272181462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.160871 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:20.233754 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:20.233805 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.233824 1305484 retry.go:31] will retry after 9.03130365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.395263 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.395392 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.395710 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:20.894916 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.894992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.895300 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:21.395041 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.395135 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.395466 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:21.395525 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:21.894931 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.895012 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.895341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.168810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:22.226105 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:22.229620 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.229649 1305484 retry.go:31] will retry after 6.326012676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.394929 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.395003 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.395290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.894864 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.894939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.895280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.395383 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.895016 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.895087 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.895360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:23.895414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:24.395042 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.395119 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:24.895109 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.895188 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.395358 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.395437 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.395700 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.895538 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.895612 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.895906 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:25.895954 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:26.395465 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.395571 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.395892 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:26.895653 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.895735 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.395741 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.395852 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.396210 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.895856 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.895939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.896273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:27.896328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:28.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.394974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.395220 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:28.556610 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:28.617128 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:28.617182 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.617202 1305484 retry.go:31] will retry after 6.797257953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.895668 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.895975 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.265354 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:29.327180 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:29.327227 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.327246 1305484 retry.go:31] will retry after 10.081474738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.395407 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.395481 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.395821 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.895626 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.895701 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:30.395476 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:30.395558 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:30.395870 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:30.395928 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:30.895674 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:30.895771 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:30.896102 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:31.395677 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:31.395765 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:31.396042 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:31.895800 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:31.895892 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:31.896225 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:32.395871 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:32.395946 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:32.396238 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:32.396286 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:32.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:32.894971 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:32.895221 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:33.394922 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:33.395006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:33.395329 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:33.894947 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:33.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:33.895346 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:34.395004 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:34.395075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:34.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:34.894995 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:34.895096 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:34.895485 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:34.895540 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:35.395275 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:35.395369 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:35.395683 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:35.415065 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:35.470618 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:35.474707 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:35.474739 1305484 retry.go:31] will retry after 12.346765183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:35.894884 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:35.894968 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:35.895217 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:36.394940 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:36.395023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:36.395311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:36.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:36.895031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:36.895297 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:37.395715 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:37.395786 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:37.396036 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:37.396085 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:37.895882 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:37.895957 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:37.896282 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:38.394978 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:38.395072 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:38.395404 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:38.894973 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:38.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:38.895353 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:39.394982 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:39.395085 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:39.395413 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:39.409781 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:39.473091 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:39.473144 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:39.473164 1305484 retry.go:31] will retry after 18.475103934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:39.895746 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:39.895826 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:39.896182 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:39.896239 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:40.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:40.394986 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:40.395287 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:40.894982 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:40.895057 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:40.895387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:41.395082 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:41.395197 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:41.395487 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:41.894877 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:41.894953 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:41.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:42.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:42.395028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:42.395341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:42.395398 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:42.895053 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:42.895127 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:42.895451 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:43.394921 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:43.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:43.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:43.894994 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:43.895073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:43.895439 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:44.395145 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:44.395224 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:44.395532 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:44.395581 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:44.895223 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:44.895291 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:44.895552 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:45.395338 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:45.395498 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:45.396157 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:45.894915 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:45.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:45.895368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:46.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:46.394994 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:46.395277 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:46.894960 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:46.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:46.895364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:46.895417 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:47.395091 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:47.395170 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:47.395536 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:47.821776 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:47.880326 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:47.883900 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:47.883932 1305484 retry.go:31] will retry after 18.240859758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:47.895204 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:47.895277 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:47.895522 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:48.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:48.395039 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:48.395369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:48.895103 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:48.895186 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:48.895530 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:48.895589 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:49.395004 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:49.395084 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:49.395389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:49.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:49.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:49.895387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:50.395307 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:50.395385 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:50.395702 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:50.895512 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:50.895597 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:50.895908 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:50.895965 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:51.395762 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:51.395833 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:51.396181 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:51.894896 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:51.894981 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:51.895266 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:52.394927 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:52.395005 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:52.395321 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:52.894986 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:52.895064 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:52.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:53.395068 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:53.395156 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:53.395497 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:53.395555 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:53.894871 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:53.894939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:53.895228 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:54.395496 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:54.395573 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:54.395859 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:54.895684 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:54.895759 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:54.896113 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:55.394869 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:55.394953 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:55.395245 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:55.894992 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:55.895075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:55.895404 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:55.895459 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:56.394949 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:56.395026 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:56.395302 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:56.894957 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:56.895052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:56.895354 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:57.395063 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:57.395338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.895034 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:57.895127 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:57.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.948848 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:58.011608 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:58.015264 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:58.015303 1305484 retry.go:31] will retry after 17.396243449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:58.394837 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:58.394927 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:58.395242 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:58.395294 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:58.894996 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:58.895074 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:58.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:59.395011 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:59.395091 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:59.395444 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:59.894993 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:59.895064 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:59.895344 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:00.395507 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:00.395593 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:00.395898 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:00.395950 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:00.894850 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:00.894938 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:00.895369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:01.394969 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:01.395050 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:01.395325 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:01.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:01.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:01.895392 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:02.395062 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:02.395142 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:02.395460 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:02.894920 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:02.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:02.895347 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:02.895401 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:03.394977 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:03.395051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:03.395392 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:03.894963 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:03.895041 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:03.895359 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:04.394902 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:04.394991 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:04.395271 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:04.894869 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:04.894956 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:04.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:05.395299 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:05.395380 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:05.395678 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:05.395727 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:05.894935 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:05.895016 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:05.895336 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:06.125881 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:06.190863 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:06.190916 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:06.190936 1305484 retry.go:31] will retry after 24.931144034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:06.395236 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:06.395314 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:06.395677 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:06.895467 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:06.895550 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:06.895878 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:07.395628 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:07.395697 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:07.395955 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:07.395997 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:07.895729 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:07.895808 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:07.896074 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:08.395873 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:08.395948 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:08.396313 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:08.894868 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:08.894944 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:08.895283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:09.394947 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:09.395033 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:09.395362 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:09.895208 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:09.895287 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:09.895612 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:09.895672 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:10.395275 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:10.395353 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:10.395606 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:10.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:10.895040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:10.895394 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:11.394972 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:11.395053 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:11.395396 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:11.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:11.894959 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:11.895219 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:12.394924 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:12.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:12.395345 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:12.395391 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:12.894984 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:12.895061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:12.895393 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:13.394881 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:13.394980 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:13.395271 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:13.894985 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:13.895067 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:13.895388 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:14.394940 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:14.395015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:14.395307 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:14.894872 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:14.894951 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:14.895211 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:14.895252 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:15.395260 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:15.395343 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:15.395713 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:15.411948 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:15.467885 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:15.471996 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:15.472026 1305484 retry.go:31] will retry after 23.671964263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:15.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:15.895665 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:15.895991 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:16.395769 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:16.395850 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:16.396115 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:16.894852 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:16.894935 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:16.895261 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:16.895324 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:17.394845 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:17.394932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:17.395290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:17.894969 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:17.895040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:17.895304 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:18.394997 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:18.395073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:18.395402 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:18.895123 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:18.895201 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:18.895524 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:18.895581 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:19.395822 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:19.395905 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:19.396165 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:19.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:19.895030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:19.895353 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:20.395230 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:20.395313 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:20.395645 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:20.894899 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:20.894977 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:20.895295 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:21.394978 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:21.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:21.395386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:21.395450 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:21.895115 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:21.895196 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:21.895514 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:22.394905 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:22.394976 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:22.395220 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:22.894945 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:22.895026 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:22.895345 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:23.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:23.395045 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:23.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:23.895065 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:23.895138 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:23.895428 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:23.895477 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:24.394916 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:24.394989 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:24.395346 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:24.895052 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:24.895137 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:24.895433 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:25.395270 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:25.395347 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:25.395641 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:25.895358 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:25.895437 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:25.895746 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:25.895805 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:26.395602 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:26.395686 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:26.396014 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:26.895774 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:26.895844 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:26.896146 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:27.394858 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:27.394944 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:27.395266 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:27.894978 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:27.895055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:27.895365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:28.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:28.394982 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:28.395236 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:28.395276 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:28.894990 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:28.895067 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:28.895714 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:29.395540 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:29.395625 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:29.395953 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:29.894869 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:29.894937 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:29.895190 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:30.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:30.395255 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:30.395559 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:30.395614 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:30.895284 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:30.895370 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:30.895692 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:31.123262 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:31.181409 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.184938 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.185056 1305484 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
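	(The give-up above is the expected end state: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so with the apiserver refusing connections the apply fails before anything is written, and the suggested --validate=false would only skip the schema download, not the connectivity failure. A hedged sketch of probing apiserver health before attempting an apply is below; the /healthz probe and InsecureSkipVerify choice are assumptions for illustration, not minikube's actual gating logic.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverUp does a cheap GET against /healthz; while this returns
	// false, any kubectl apply (validated or not) can only fail with
	// connection refused, as the repeated errors in this log show.
	func apiserverUp(base string) bool {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The test cluster's apiserver uses a cluster-local CA, so
			// skip verification for this liveness probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(base + "/healthz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		fmt.Println(apiserverUp("https://192.168.49.2:8441"))
	}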
	I1218 00:30:31.395353 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:31.395427 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:31.395686 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:31.895522 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:31.895599 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:31.895971 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:32.395780 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:32.395853 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:32.396133 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:32.396184 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:32.894842 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:32.894921 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:32.895187 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:33.394857 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:33.394937 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:33.395263 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:33.894971 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:33.895046 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:33.895325 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:34.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:34.395026 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:34.395270 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:34.894954 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:34.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:34.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:34.895435 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:35.395407 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:35.395498 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:35.395839 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:35.895696 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:35.895778 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:35.896070 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:36.395851 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:36.395932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:36.396284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:36.894990 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:36.895074 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:36.895427 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:36.895485 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:37.395134 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:37.395209 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:37.395462 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:37.894924 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:37.894999 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:37.895288 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:38.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:38.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:38.395329 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:38.894986 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:38.895064 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:38.895330 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:39.144879 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:39.206506 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206561 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206652 1305484 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 00:30:39.209780 1305484 out.go:179] * Enabled addons: 
	I1218 00:30:39.213292 1305484 addons.go:530] duration metric: took 1m29.803748848s for enable addons: enabled=[]
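	[The storage-provisioner failure above has the same root cause as the readiness loop: kubectl apply first downloads the OpenAPI schema for client-side validation, and with the apiserver down on localhost:8441 that download is refused, so the addon callback exits with status 1 and minikube gives up with enabled=[]. Note that kubectl's own hint to pass --validate=false would only skip the schema download; the apply itself would then fail against the same refused port. Below is a hedged Go sketch of the apply-with-retry shape, assuming a plain os/exec invocation (minikube actually runs the pinned kubectl over SSH inside the node, as the ssh_runner line shows); the retry count and sleep are illustrative, not minikube's policy.]

	// Sketch of the apply-and-retry behaviour behind addons.go:477: run the
	// pinned kubectl against the addon manifest and retry on failure. The
	// binary and manifest paths are copied from the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl"
		manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"

		for attempt := 1; attempt <= 5; attempt++ {
			cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
			cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
			if out, err := cmd.CombinedOutput(); err != nil {
				// With the apiserver down this fails exactly as logged:
				// "failed to download openapi ... connection refused".
				fmt.Printf("apply failed (attempt %d): %v\n%s", attempt, err, out)
				time.Sleep(2 * time.Second)
				continue
			}
			fmt.Println("addon manifest applied")
			return
		}
	}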
	I1218 00:30:39.394864 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:39.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:39.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:39.395343 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:39.895241 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:39.895315 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:39.895674 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:40.395346 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:40.395421 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:40.395699 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:40.895493 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:40.895579 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:40.895927 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:41.395800 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:41.395901 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:41.396304 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:41.396363 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:41.894996 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:41.895079 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:41.895335 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:42.394953 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:42.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:42.395339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:42.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:42.895026 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:42.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:43.394902 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:43.394978 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:43.395300 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:43.894973 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:43.895052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:43.895372 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:43.895429 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:44.395104 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:44.395180 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:44.395503 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:44.894907 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:44.894987 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:44.895277 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:45.394864 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:45.394949 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:45.395278 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:45.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:45.895023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:45.895367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:46.394945 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:46.395018 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:46.395280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:46.395324 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:46.894957 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:46.895033 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:46.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:47.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:47.395036 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:47.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:47.895043 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:47.895118 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:47.895453 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:48.394988 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:48.395066 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:48.395396 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:48.395453 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:48.894934 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:48.895009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:48.895329 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:49.394998 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:49.395073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:49.395340 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:49.894951 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:49.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:49.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:50.395234 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:50.395312 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:50.395669 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:50.395726 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:50.895464 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:50.895541 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:50.895800 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:51.395565 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:51.395643 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:51.395951 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:51.895746 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:51.895820 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:51.896139 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:52.395800 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:52.395866 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:52.396109 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:52.396147 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:52.894845 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:52.894930 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:52.895239 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:53.394949 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:53.395031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:53.395362 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:53.894914 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:53.894994 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:53.895246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:54.394927 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:54.395001 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:54.395316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:54.895019 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:54.895132 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:54.895462 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:54.895517 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:55.395382 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:55.395459 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:55.395747 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:55.895567 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:55.895652 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:55.896004 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:56.395794 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:56.395876 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:56.396202 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:56.894849 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:56.894918 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:56.895248 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:57.394940 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:57.395023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:57.395357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:57.395414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:57.895089 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:57.895163 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:57.895506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:58.395157 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:58.395224 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:58.395467 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:58.894926 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:58.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:58.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:59.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:59.394999 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:59.395287 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:59.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:59.894968 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:59.895216 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:59.895259 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:00.395510 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:00.395606 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:00.395915 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:00.895683 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:00.895763 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:00.896072 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:01.395863 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:01.395942 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:01.396196 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:01.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:01.894969 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:01.895310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:01.895364 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:02.395506 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:02.395587 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:02.395926 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:02.895711 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:02.895787 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:02.896085 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:03.394835 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:03.394918 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:03.395241 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:03.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:03.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:03.895351 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:03.895409 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:04.394887 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:04.394952 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:04.395203 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:04.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:04.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:04.895585 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:05.395452 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:05.395534 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:05.395859 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:05.895595 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:05.895675 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:05.895945 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:05.895986 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:06.395824 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:06.395899 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:06.396242 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:06.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:06.895023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:06.895350 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:07.395035 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:07.395109 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:07.395364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:07.894882 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:07.894960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:07.895283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:08.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:08.395097 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:08.395422 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:08.395475 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:08.895113 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:08.895185 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:08.895437 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:09.394963 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:09.395061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:09.395425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:09.894913 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:09.894995 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:09.895574 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:10.395178 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:10.395258 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:10.395523 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:10.395562 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:10.895006 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:10.895092 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:10.895441 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:11.395247 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:11.395326 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:11.395703 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:11.895773 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:11.895839 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:11.896085 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:12.395833 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:12.395908 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:12.396246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:12.396315 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:12.894849 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:12.894941 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:12.895310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:13.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.395339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:13.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.895004 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.895326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.394884 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.395283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.894810 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.894876 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.895171 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:14.895233 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:15.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.395266 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.395614 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:15.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.895319 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.394906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.394976 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.395230 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:16.895449 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:17.395167 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.395260 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.395607 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:17.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.895160 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.895445 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.394976 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.395051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.395367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.895357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:19.395005 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.395075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.395332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:19.395376 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:19.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.395282 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.395364 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.395694 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.895475 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.895552 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.895809 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:21.395604 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.395678 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.395990 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:21.396041 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:21.895659 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.895733 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.896015 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.395655 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.395728 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.395992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.895435 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.895515 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.895848 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:23.395649 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.395732 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.396074 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:23.396134 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:23.895883 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.895960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.896252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.394837 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.394919 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.395263 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.894847 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.894932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.895271 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.395082 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.395154 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.395412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.895068 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.895145 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.895475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:25.895531 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:26.395075 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.395488 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:26.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.895250 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.395036 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.395377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.895371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:28.395072 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.395417 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:28.395459 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-232602 repeated every ~500ms from 00:31:28 through 00:32:28; every attempt failed with the same "connection refused", and node_ready.go:55 logged the retry warning below roughly every 2 seconds ...]
	W1218 00:32:27.895426 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:29.395111 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:29.395193 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:29.395536 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:29.895559 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:29.895634 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:29.895935 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:29.895990 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:30.395759 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:30.395836 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:30.396159 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:30.894851 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:30.894931 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:30.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:31.394947 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:31.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:31.395281 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:31.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:31.895013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:31.895344 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:32.395052 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:32.395132 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:32.395475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:32.395535 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:32.894897 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:32.894975 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:32.895317 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:33.395060 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:33.395183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:33.395508 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:33.895211 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:33.895286 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:33.895620 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:34.394801 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:34.394869 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:34.395114 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:34.894830 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:34.894907 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:34.895223 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:34.895273 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:35.395130 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:35.395226 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:35.395566 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:35.895126 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:35.895205 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:35.895466 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:36.394946 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:36.395020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:36.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:36.894920 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:36.894999 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:36.895341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:36.895398 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:37.394890 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:37.394969 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:37.395292 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:37.894979 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:37.895060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:37.895374 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:38.394915 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:38.394990 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:38.395322 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:38.895016 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:38.895094 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:38.895411 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:38.895465 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:39.394953 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:39.395028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:39.395369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:39.895143 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:39.895225 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:39.895574 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:40.395286 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:40.395370 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:40.395636 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:40.894967 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:40.895045 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:40.895367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:41.394945 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:41.395031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:41.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:41.395439 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:41.894881 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:41.894976 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:41.895292 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:42.394999 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:42.395081 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:42.395442 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:42.895025 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:42.895106 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:42.895432 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:43.394888 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:43.394966 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:43.395216 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:43.894899 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:43.894983 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:43.895292 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:43.895348 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:44.394922 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:44.394996 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:44.395332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:44.894831 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:44.894908 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:44.895175 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:45.395389 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:45.395497 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:45.395880 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:45.895561 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:45.895646 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:45.895997 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:45.896056 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:46.395702 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:46.395785 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:46.396046 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:46.895863 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:46.895935 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:46.896257 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:47.394967 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:47.395056 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:47.395439 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:47.894977 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:47.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:47.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:48.395027 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:48.395106 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:48.395444 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:48.395498 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:48.895164 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:48.895243 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:48.895582 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:49.395264 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:49.395334 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:49.395597 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:49.895474 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:49.895557 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:49.895913 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:50.395724 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:50.395800 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:50.396111 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:50.396169 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:50.895876 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:50.895947 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:50.896202 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:51.394985 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:51.395062 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:51.395401 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:51.895119 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:51.895200 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:51.895548 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:52.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:52.395025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:52.395322 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:52.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:52.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:52.895353 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:52.895410 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:53.395078 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:53.395162 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:53.395500 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:53.895715 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:53.895783 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:53.896041 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:54.395464 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:54.395544 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:54.395863 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:54.895501 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:54.895586 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:54.895913 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:54.895971 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:55.395850 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:55.395924 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:55.396188 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:55.894889 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:55.894972 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:55.895296 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:56.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:56.395115 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:56.395513 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:56.895193 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:56.895259 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:56.895583 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:57.394937 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:57.395024 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:57.395358 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:57.395413 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:57.894934 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:57.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:57.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:58.395771 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:58.395843 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:58.396103 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:58.895868 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:58.895950 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:58.896279 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:59.394910 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:59.394988 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:59.395315 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:59.895060 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:59.895138 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:59.895426 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:59.895473 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:00.395531 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:00.395633 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:00.396109 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:00.894904 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:00.894983 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:00.895313 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:01.394991 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:01.395062 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:01.395320 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:01.894951 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:01.895027 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:01.895358 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:02.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:02.395021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:02.395373 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:02.395430 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:02.895092 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:02.895164 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:02.895428 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:03.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:03.395039 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:03.395411 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:03.895004 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:03.895093 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:03.895426 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:04.394889 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:04.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:04.395259 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:04.894914 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:04.894989 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:04.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:04.895395 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:05.395163 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:05.395243 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:05.395682 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:05.895450 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:05.895524 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:05.895784 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:06.395568 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:06.395656 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:06.395978 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:06.895794 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:06.895874 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:06.896211 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:06.896271 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:07.394879 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:07.394959 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:07.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:07.894962 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:07.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:07.895397 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:08.394973 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:08.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:08.395407 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:08.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:08.895172 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:08.895469 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:09.394967 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:09.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:09.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:09.395444 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:09.895137 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:09.895212 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:09.895526 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:10.395178 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:10.395259 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:10.395579 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:10.895391 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:10.895474 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:10.895867 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:11.395660 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:11.395744 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:11.396081 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:11.396140 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:11.895822 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:11.895896 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:11.896157 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:12.394896 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:12.394973 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:12.395293 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:12.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:12.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:12.895368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:13.395034 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:13.395107 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:13.395365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:13.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:13.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:13.895385 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:13.895454 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:14.395141 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:14.395215 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:14.395535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:14.895214 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:14.895295 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:14.895592 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:15.395316 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:15.395398 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:15.395758 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:15.895576 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:15.895652 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:15.895992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:15.896047 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:16.395754 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:16.395828 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:16.396096 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:16.895867 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:16.895943 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:16.896286 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:17.394997 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:17.395084 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:17.395428 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:17.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:17.894962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:17.895235 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:18.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:18.395037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:18.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:18.395414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[ ... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 poll (identical "Request Body"/"Request"/"Response" triplets, empty responses) repeats every ~500ms from 00:33:18 through 00:34:19; node_ready.go:55 logs the same "dial tcp 192.168.49.2:8441: connect: connection refused" retry warning roughly every 2s throughout ... ]
	I1218 00:34:20.395179 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:20.395258 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:20.395604 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:20.395662 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:20.894898 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:20.894974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:20.895244 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:21.394932 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:21.395016 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:21.395326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:21.894952 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:21.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:21.895353 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:22.394923 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:22.394996 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:22.395310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:22.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:22.895014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:22.895350 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:22.895406 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:23.395099 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:23.395183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:23.395522 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:23.895196 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:23.895267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:23.895572 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:24.394919 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:24.394997 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:24.395328 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:24.894967 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:24.895049 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:24.895386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:24.895443 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:25.395131 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:25.395205 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:25.395456 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:25.894945 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:25.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:25.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:26.394940 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:26.395017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:26.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:26.894973 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:26.895045 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:26.895301 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:27.394928 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:27.395004 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:27.395326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:27.395386 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:27.895785 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:27.895857 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:27.896201 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:28.394885 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:28.394962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:28.395288 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:28.894973 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:28.895060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:28.895403 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:29.395103 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:29.395179 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:29.395527 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:29.395588 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:29.894812 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:29.894881 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:29.895140 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:30.395146 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:30.395230 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:30.395562 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:30.894965 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:30.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:30.895372 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:31.394932 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:31.395008 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:31.395345 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:31.895039 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:31.895125 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:31.895444 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:31.895519 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:32.395185 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:32.395267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:32.395683 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:32.895139 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:32.895210 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:32.895468 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:33.394926 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:33.394999 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:33.395329 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:33.894912 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:33.894997 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:33.895321 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:34.394900 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:34.394970 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:34.395227 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:34.395268 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:34.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:34.895007 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:34.895354 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:35.395150 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:35.395242 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:35.395581 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:35.895262 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:35.895333 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:35.895655 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:36.395446 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:36.395526 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:36.395891 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:36.395954 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:36.895879 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:36.896025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:36.896489 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:37.395256 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:37.395334 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:37.395590 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:37.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:37.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:37.895382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:38.395094 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:38.395175 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:38.395508 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:38.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:38.894997 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:38.895273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:38.895318 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:39.394981 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:39.395056 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:39.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:39.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:39.895025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:39.895350 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:40.395255 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:40.395330 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:40.395611 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:40.895417 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:40.895495 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:40.895856 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:40.895911 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:41.395671 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:41.395749 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:41.396075 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:41.895770 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:41.895842 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:41.896100 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:42.394861 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:42.394945 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:42.395270 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:42.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:42.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:42.895336 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:43.394987 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:43.395061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:43.395349 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:43.395397 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:43.894931 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:43.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:43.895331 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:44.395078 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:44.395167 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:44.395534 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:44.894915 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:44.895001 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:44.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:45.395381 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:45.395465 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:45.395835 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:45.395899 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:45.895622 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:45.895696 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:45.896010 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:46.395697 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:46.395815 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:46.396068 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:46.895828 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:46.895903 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:46.896238 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:47.394829 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:47.394914 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:47.395208 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:47.894909 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:47.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:47.895256 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:47.895315 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:48.394935 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:48.395013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:48.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:48.895103 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:48.895191 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:48.895572 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:49.395252 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:49.395319 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:49.395570 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:49.895468 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:49.895542 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:49.895868 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:49.895924 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:50.395784 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:50.395860 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:50.396189 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:50.895823 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:50.895905 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:50.896170 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:51.394877 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:51.394954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:51.395305 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:51.894901 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:51.894977 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:51.895290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:52.394890 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:52.394961 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:52.395282 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:52.395333 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:52.895035 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:52.895119 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:52.895493 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:53.395218 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:53.395297 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:53.395619 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:53.894885 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:53.894963 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:53.895214 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:54.394879 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:54.394959 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:54.395306 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:54.395365 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:54.894934 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:54.895022 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:54.895382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:55.395135 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:55.395210 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:55.395475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:55.894945 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:55.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:55.895381 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:56.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:56.395029 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:56.395367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:56.395422 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:56.895064 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:56.895133 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:56.895393 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:57.394932 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:57.395009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:57.395363 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:57.895056 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:57.895135 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:57.895491 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:58.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:58.395253 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:58.395564 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:58.395616 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:58.894935 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:58.895017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:58.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:59.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:59.395042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:59.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:59.894882 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:59.894955 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:59.895253 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:00.395263 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:00.395351 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:00.395651 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:00.395696 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:00.895585 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:00.895660 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:00.895999 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:01.395773 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:01.395844 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:01.396106 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:01.895887 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:01.895974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:01.896290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:02.394993 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:02.395076 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:02.395438 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:02.895141 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:02.895226 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:02.895545 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:02.895597 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:03.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:03.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:03.395370 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:03.895085 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:03.895169 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:03.895513 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:04.395827 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:04.395892 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:04.396191 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:04.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:04.894983 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:04.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:05.395161 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:05.395239 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:05.395535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:05.395581 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:05.894901 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:05.894974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:05.895226 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:06.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:06.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:06.395376 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:06.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:06.895030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:06.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:07.395052 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:07.395122 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:07.395495 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:07.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:07.895023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:07.895351 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:07.895403 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:08.395103 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:08.395179 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:08.395500 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:08.895048 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:08.895123 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:08.895471 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:09.395187 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:09.395267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:09.395657 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:09.895568 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:09.895676 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:09.896021 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:09.896082 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:10.395155 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:10.395216 1305484 node_ready.go:38] duration metric: took 6m0.000503053s for node "functional-232602" to be "Ready" ...
	I1218 00:35:10.402744 1305484 out.go:203] 
	W1218 00:35:10.405748 1305484 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 00:35:10.405971 1305484 out.go:285] * 
	W1218 00:35:10.408384 1305484 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:35:10.411337 1305484 out.go:203] 
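
The loop condensed above is minikube's node-readiness wait: a fixed-interval poll of the apiserver that aborts when its overall context budget (6m here) expires, which then surfaces as the GUEST_START / "context deadline exceeded" failure. Below is a minimal, stdlib-only Go sketch of that retry shape. The URL, 500ms interval, and 6m budget are taken from this log; waitReady and everything else is illustrative, not minikube's actual implementation (which additionally handles TLS and protobuf content negotiation).

	package main

	import (
		"context"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	// waitReady mirrors the GET-every-500ms loop above: poll url until it
	// answers with a 2xx, or give up when ctx's deadline expires and report
	// both the deadline error and the last probe error.
	func waitReady(ctx context.Context, url string, interval time.Duration) error {
		client := &http.Client{Timeout: interval}
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		var lastErr error
		for {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return err
			}
			resp, err := client.Do(req)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode < 300 {
					return nil
				}
				err = fmt.Errorf("unexpected status %s", resp.Status)
			}
			lastErr = err // here: "dial tcp 192.168.49.2:8441: connect: connection refused"
			select {
			case <-ctx.Done():
				// the analogue of minikube's "WaitNodeCondition: context deadline exceeded"
				return errors.Join(ctx.Err(), lastErr)
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(waitReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-232602", 500*time.Millisecond))
	}

Against a healthy apiserver this returns almost immediately; against the cluster in this run it spins for the full six minutes, matching the 6m0.000503053s duration metric above.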
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866402430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866417380Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866460874Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866476414Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866485874Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866499339Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866509103Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866525422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866545812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866575858Z" level=info msg="Connect containerd service"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866870260Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.867476540Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.886162821Z" level=info msg="Start subscribing containerd event"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.886328101Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.886539920Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.886400657Z" level=info msg="Start recovering state"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.944741362Z" level=info msg="Start event monitor"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.944959236Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945049663Z" level=info msg="Start streaming server"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945134772Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945359522Z" level=info msg="runtime interface starting up..."
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945434680Z" level=info msg="starting plugins..."
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945497316Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 00:29:07 functional-232602 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.947920852Z" level=info msg="containerd successfully booted in 0.102488s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:35:12.138722    8413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:12.139145    8413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:12.140808    8413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:12.141178    8413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:12.142778    8413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
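This kubectl invocation runs on the node itself against localhost:8441 (the --apiserver-port the cluster was started with), so the refusal means kube-apiserver is not listening at all rather than a routing or proxy problem. A quick way to confirm the same thing by hand, for example:

	minikube -p functional-232602 ssh -- curl -sk --max-time 2 https://localhost:8441/healthz

With kube-apiserver down this fails with the same connection refused.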
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:35:12 up  7:17,  0 user,  load average: 0.11, 0.24, 0.64
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:35:08 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:09 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 18 00:35:09 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:09 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:09 functional-232602 kubelet[8298]: E1218 00:35:09.689395    8298 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:09 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:09 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:10 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 18 00:35:10 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:10 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:10 functional-232602 kubelet[8303]: E1218 00:35:10.451257    8303 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:10 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:10 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 18 00:35:11 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:11 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:11 functional-232602 kubelet[8316]: E1218 00:35:11.217229    8316 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 18 00:35:11 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:11 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:11 functional-232602 kubelet[8363]: E1218 00:35:11.957381    8363 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (380.959826ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
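The kubelet journal above is the actual root cause for this run: the generated kubelet configuration refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and this Ubuntu 20.04 / 5.15 host is still on the legacy hierarchy, so kubelet crash-loops (restart counters 808 through 811) and the apiserver it would have brought up never listens. Everything else in this report (connection refused on 8441, the failed Ready wait, the kubectl failures) follows from that. A minimal sketch of the same host check (assumes golang.org/x/sys; this is not kubelet's code):

package main

import (
    "fmt"
    "log"

    "golang.org/x/sys/unix"
)

func main() {
    // kubelet-style detection: statfs the cgroup mount point and compare
    // the filesystem magic against the cgroup v2 (unified) magic number.
    var st unix.Statfs_t
    if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
        log.Fatalf("statfs /sys/fs/cgroup: %v", err)
    }
    if st.Type == unix.CGROUP2_SUPER_MAGIC {
        fmt.Println("cgroup v2 (unified hierarchy)")
    } else {
        fmt.Println("cgroup v1 (legacy hierarchy)") // what this host reports
    }
}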
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (367.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-232602 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-232602 get po -A: exit status 1 (58.407398ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-232602 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-232602 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-232602 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (333.028098ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
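Worth noting in the inspect output above: the container's apiserver port 8441/tcp is published only on host loopback as 127.0.0.1:33905, while the kubeconfig targets 192.168.49.2:8441 on the container network; both paths dead-end the same way once kube-apiserver is down. The host-side port can be recovered with the same Go template minikube itself uses for the SSH port later in this log, e.g.:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-232602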
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdspecific-port1226370769/001:/mount-9p --alsologtostderr -v=1 --port 46464                     │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount-9p | grep 9p                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh -- ls -la /mount-9p                                                                                                             │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh sudo umount -f /mount-9p                                                                                                        │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount3 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount2 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount1                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ mount          │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount1 --alsologtostderr -v=1                                    │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ ssh            │ functional-739047 ssh findmnt -T /mount1                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh findmnt -T /mount2                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh findmnt -T /mount3                                                                                                              │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ mount          │ -p functional-739047 --kill=true                                                                                                                      │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ update-context │ functional-739047 update-context --alsologtostderr -v=2                                                                                               │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format short --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format yaml --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh            │ functional-739047 ssh pgrep buildkitd                                                                                                                 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ image          │ functional-739047 image ls --format json --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls --format table --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr                                                │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image          │ functional-739047 image ls                                                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ delete         │ -p functional-739047                                                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ start          │ -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ start          │ -p functional-232602 --alsologtostderr -v=8                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:29:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:29:05.243654 1305484 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:29:05.243837 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.243867 1305484 out.go:374] Setting ErrFile to fd 2...
	I1218 00:29:05.243888 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.244277 1305484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:29:05.244868 1305484 out.go:368] Setting JSON to false
	I1218 00:29:05.245808 1305484 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25892,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:29:05.245939 1305484 start.go:143] virtualization:  
	I1218 00:29:05.249423 1305484 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:29:05.253059 1305484 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:29:05.253187 1305484 notify.go:221] Checking for updates...
	I1218 00:29:05.259241 1305484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:29:05.262171 1305484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:05.265173 1305484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:29:05.268135 1305484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:29:05.270950 1305484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:29:05.274293 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:05.274440 1305484 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:29:05.308275 1305484 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:29:05.308407 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.375725 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.366230286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.375834 1305484 docker.go:319] overlay module found
	I1218 00:29:05.378939 1305484 out.go:179] * Using the docker driver based on existing profile
	I1218 00:29:05.381619 1305484 start.go:309] selected driver: docker
	I1218 00:29:05.381657 1305484 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.381752 1305484 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:29:05.381892 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.440724 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.431205912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.441147 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:05.441215 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:05.441270 1305484 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.444475 1305484 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:29:05.447488 1305484 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:29:05.450519 1305484 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:29:05.453580 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:05.453631 1305484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:29:05.453641 1305484 cache.go:65] Caching tarball of preloaded images
	I1218 00:29:05.453681 1305484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:29:05.453745 1305484 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:29:05.453756 1305484 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:29:05.453862 1305484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:29:05.474116 1305484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:29:05.474140 1305484 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:29:05.474160 1305484 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:29:05.474205 1305484 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:29:05.474271 1305484 start.go:364] duration metric: took 39.072µs to acquireMachinesLock for "functional-232602"
	I1218 00:29:05.474294 1305484 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:29:05.474305 1305484 fix.go:54] fixHost starting: 
	I1218 00:29:05.474585 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:05.494473 1305484 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:29:05.494511 1305484 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:29:05.497625 1305484 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:29:05.497657 1305484 machine.go:94] provisionDockerMachine start ...
	I1218 00:29:05.497756 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.514682 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.515020 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.515044 1305484 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:29:05.668376 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.668400 1305484 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:29:05.668465 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.700140 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.700482 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.700495 1305484 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:29:05.865944 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.866034 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.884487 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.884983 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.885010 1305484 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:29:06.041516 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:29:06.041541 1305484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:29:06.041561 1305484 ubuntu.go:190] setting up certificates
	I1218 00:29:06.041572 1305484 provision.go:84] configureAuth start
	I1218 00:29:06.041652 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.060898 1305484 provision.go:143] copyHostCerts
	I1218 00:29:06.060951 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.060994 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:29:06.061002 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.061080 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:29:06.061163 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061182 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:29:06.061187 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061215 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:29:06.061256 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061273 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:29:06.061277 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061301 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:29:06.061349 1305484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:29:06.177802 1305484 provision.go:177] copyRemoteCerts
	I1218 00:29:06.177898 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:29:06.177967 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.195440 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.308765 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 00:29:06.308835 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:29:06.326972 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 00:29:06.327095 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:29:06.345137 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 00:29:06.345225 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:29:06.363588 1305484 provision.go:87] duration metric: took 321.991809ms to configureAuth
	I1218 00:29:06.363617 1305484 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:29:06.363812 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:06.363826 1305484 machine.go:97] duration metric: took 866.163062ms to provisionDockerMachine
	I1218 00:29:06.363833 1305484 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:29:06.363845 1305484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:29:06.363904 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:29:06.363949 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.381445 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.493044 1305484 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:29:06.496574 1305484 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1218 00:29:06.496595 1305484 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1218 00:29:06.496599 1305484 command_runner.go:130] > VERSION_ID="12"
	I1218 00:29:06.496604 1305484 command_runner.go:130] > VERSION="12 (bookworm)"
	I1218 00:29:06.496612 1305484 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1218 00:29:06.496615 1305484 command_runner.go:130] > ID=debian
	I1218 00:29:06.496641 1305484 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1218 00:29:06.496649 1305484 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1218 00:29:06.496655 1305484 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1218 00:29:06.496744 1305484 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:29:06.496762 1305484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:29:06.496773 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:29:06.496837 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:29:06.496920 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:29:06.496932 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /etc/ssl/certs/12611482.pem
	I1218 00:29:06.497013 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:29:06.497022 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> /etc/test/nested/copy/1261148/hosts
	I1218 00:29:06.497083 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:29:06.504772 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:06.523736 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:29:06.542759 1305484 start.go:296] duration metric: took 178.908993ms for postStartSetup
	I1218 00:29:06.542856 1305484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:29:06.542901 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.560753 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.665778 1305484 command_runner.go:130] > 18%
	I1218 00:29:06.665854 1305484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:29:06.671095 1305484 command_runner.go:130] > 160G
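
The two df probes above each read one field from the second line of df output: the used-space percentage from df -h and the free gigabytes from df -BG. A minimal standalone sketch of the same probe, not minikube's actual helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // dfField runs df with the given flag and returns 1-based column `col`
    // of the second output line, mirroring `df ... | awk 'NR==2{print $N}'`.
    func dfField(path, flag string, col int) (string, error) {
    	out, err := exec.Command("df", flag, path).Output()
    	if err != nil {
    		return "", err
    	}
    	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
    	if len(lines) < 2 {
    		return "", fmt.Errorf("unexpected df output: %q", out)
    	}
    	fields := strings.Fields(lines[1])
    	if col > len(fields) {
    		return "", fmt.Errorf("df line has only %d fields", len(fields))
    	}
    	return fields[col-1], nil
    }

    func main() {
    	used, _ := dfField("/var", "-h", 5)  // e.g. "18%"
    	free, _ := dfField("/var", "-BG", 4) // e.g. "160G"
    	fmt.Println("used:", used, "free:", free)
    }
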
	I1218 00:29:06.671651 1305484 fix.go:56] duration metric: took 1.19734099s for fixHost
	I1218 00:29:06.671671 1305484 start.go:83] releasing machines lock for "functional-232602", held for 1.197387766s
	I1218 00:29:06.671738 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.688941 1305484 ssh_runner.go:195] Run: cat /version.json
	I1218 00:29:06.689003 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.689377 1305484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:29:06.689435 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.710307 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.721003 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.812429 1305484 command_runner.go:130] > {"iso_version": "v1.37.0-1765846775-22141", "kicbase_version": "v0.0.48-1765966054-22186", "minikube_version": "v1.37.0", "commit": "c344550999bcbb78f38b2df057224788bb2d30b2"}
	I1218 00:29:06.812585 1305484 ssh_runner.go:195] Run: systemctl --version
	I1218 00:29:06.910410 1305484 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 00:29:06.913301 1305484 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1218 00:29:06.913347 1305484 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1218 00:29:06.913421 1305484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 00:29:06.917811 1305484 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 00:29:06.917849 1305484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:29:06.917931 1305484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:29:06.925837 1305484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
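
The find/mv pipeline above "disables" any bridge or podman CNI config by renaming it with a .mk_disabled suffix so the runtime stops loading it; here nothing matched. A sketch of the same rename pass, assuming glob patterns are an adequate stand-in for the find predicates:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Rename anything matching *bridge* or *podman* in /etc/cni/net.d so
    	// it keeps its contents but is no longer picked up as a CNI config.
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, _ := filepath.Glob(pattern)
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, "disable:", err)
    			}
    		}
    	}
    }
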
	I1218 00:29:06.925861 1305484 start.go:496] detecting cgroup driver to use...
	I1218 00:29:06.925891 1305484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:29:06.925936 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:29:06.941416 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:29:06.954870 1305484 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:29:06.954953 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:29:06.971407 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:29:06.985680 1305484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:29:07.097075 1305484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:29:07.240817 1305484 docker.go:234] disabling docker service ...
	I1218 00:29:07.240965 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:29:07.256804 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:29:07.271026 1305484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:29:07.407005 1305484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:29:07.534286 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
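
The stop/disable/mask sequence above follows the usual systemd order for turning a runtime off: stop the socket before the service so socket activation cannot restart it, then disable and mask. A rough local sketch of the same sequence; minikube issues these over SSH and treats individual failures as non-fatal:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "cri-docker.socket"},
    		{"systemctl", "stop", "-f", "cri-docker.service"},
    		{"systemctl", "disable", "cri-docker.socket"},
    		{"systemctl", "mask", "cri-docker.service"},
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, s := range steps {
    		// Best-effort: log and continue, as the run above does.
    		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
    			fmt.Printf("%v: %v (%s)\n", s, err, out)
    		}
    	}
    }
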
	I1218 00:29:07.548592 1305484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:29:07.562819 1305484 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 00:29:07.564071 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:29:07.574541 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:29:07.583515 1305484 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:29:07.583615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:29:07.592330 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.601414 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:29:07.610399 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.619445 1305484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:29:07.627615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:29:07.637099 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:29:07.646771 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
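
The sed calls above patch /etc/containerd/config.toml in place; the decisive one for the detected "cgroupfs" driver forces SystemdCgroup = false while preserving indentation. The same edit as a sketch; the plugin section path is illustrative and differs between containerd 1.x and 2.x configs:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
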
	I1218 00:29:07.656000 1305484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:29:07.663026 1305484 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 00:29:07.664029 1305484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
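
Writing 1 to /proc/sys/net/ipv4/ip_forward is the same as sysctl -w net.ipv4.ip_forward=1; bridged pod traffic needs IPv4 forwarding on the node, and the preceding check confirmed bridge-nf-call-iptables is already on. As a root-only sketch:

    package main

    import "os"

    func main() {
    	// Equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward`; needs root.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		panic(err)
    	}
    }
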
	I1218 00:29:07.671707 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:07.789368 1305484 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 00:29:07.948156 1305484 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:29:07.948230 1305484 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:29:07.952108 1305484 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1218 00:29:07.952130 1305484 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 00:29:07.952136 1305484 command_runner.go:130] > Device: 0,72	Inode: 1611        Links: 1
	I1218 00:29:07.952144 1305484 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:07.952150 1305484 command_runner.go:130] > Access: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952154 1305484 command_runner.go:130] > Modify: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952160 1305484 command_runner.go:130] > Change: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952164 1305484 command_runner.go:130] >  Birth: -
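
The "Will wait 60s for socket path" step boils down to polling until the containerd socket exists, which the stat output above confirms. A sketch of such a wait loop, assuming a 200ms poll interval (the real cadence may differ):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("containerd socket is up")
    }
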
	I1218 00:29:07.952461 1305484 start.go:564] Will wait 60s for crictl version
	I1218 00:29:07.952520 1305484 ssh_runner.go:195] Run: which crictl
	I1218 00:29:07.958389 1305484 command_runner.go:130] > /usr/local/bin/crictl
	I1218 00:29:07.959041 1305484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:29:07.980682 1305484 command_runner.go:130] > Version:  0.1.0
	I1218 00:29:07.980702 1305484 command_runner.go:130] > RuntimeName:  containerd
	I1218 00:29:07.980709 1305484 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1218 00:29:07.980714 1305484 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 00:29:07.982988 1305484 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:29:07.983059 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.002890 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.002977 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.027238 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.034949 1305484 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:29:08.037919 1305484 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:29:08.055210 1305484 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:29:08.059294 1305484 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1218 00:29:08.059421 1305484 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:29:08.059535 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:08.059617 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.084496 1305484 command_runner.go:130] > {
	I1218 00:29:08.084519 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.084525 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084534 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.084540 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084546 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.084550 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084554 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084566 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.084574 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084578 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.084582 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084589 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084593 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084596 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084609 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.084616 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084642 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.084646 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084651 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084659 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.084666 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084671 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.084678 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084682 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084686 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084689 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084696 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.084705 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084716 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.084722 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084731 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084739 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.084751 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084756 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.084760 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.084764 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084768 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084777 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084786 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.084791 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084802 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.084805 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084810 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084818 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.084824 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084829 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.084835 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084839 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084851 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084855 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084860 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084863 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084868 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084876 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.084883 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084888 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.084892 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084896 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084905 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.084917 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084922 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.084929 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084943 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084946 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084957 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084961 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084965 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084968 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084975 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.084983 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084991 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.084998 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085003 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085019 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.085026 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085033 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.085037 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085041 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085044 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085050 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085054 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085057 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085060 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085067 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.085073 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085078 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.085084 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085088 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085106 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.085110 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085114 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.085124 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085128 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085132 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085138 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085148 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.085153 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085160 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.085166 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085170 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085182 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.085191 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085195 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.085199 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085203 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085206 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085224 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085228 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085231 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085235 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085244 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.085252 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085258 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.085264 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085270 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085278 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.085287 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085291 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.085296 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085300 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.085306 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085313 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085317 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.085320 1305484 command_runner.go:130] >     }
	I1218 00:29:08.085323 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.085325 1305484 command_runner.go:130] > }
	I1218 00:29:08.087939 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.087964 1305484 containerd.go:534] Images already preloaded, skipping extraction
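
The "all images are preloaded" decision comes from comparing the crictl image list against what the selected Kubernetes version needs. A sketch of that comparison, with a hand-picked expected list standing in for the manifest minikube actually derives it from:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // criImages models just the fields of `crictl images --output json`
    // that are needed here.
    type criImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var imgs criImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
    		"registry.k8s.io/pause:3.10.1",
    	} {
    		fmt.Println(want, "preloaded:", have[want])
    	}
    }
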
	I1218 00:29:08.088036 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.111236 1305484 command_runner.go:130] > {
	I1218 00:29:08.111264 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.111269 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111279 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.111286 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111295 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.111298 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111302 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111311 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.111318 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111322 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.111330 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111334 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111337 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111340 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111347 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.111352 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111358 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.111364 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111368 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111379 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.111391 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111396 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.111400 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111404 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111407 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111410 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111417 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.111421 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111426 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.111429 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111437 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111447 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.111454 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111462 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.111467 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.111475 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111478 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111483 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111491 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.111499 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111504 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.111507 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111511 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111519 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.111522 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111527 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.111533 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111537 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111543 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111547 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111559 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111562 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111565 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111573 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.111580 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111585 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.111588 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111592 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111600 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.111606 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111611 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.111617 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111626 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111632 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111635 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111639 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111646 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111652 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111659 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.111662 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111668 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.111671 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111676 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111690 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.111697 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111701 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.111707 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111711 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111716 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111720 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111739 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111742 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111746 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111755 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.111759 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111768 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.111771 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111775 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111785 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.111798 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111802 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.111805 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111809 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111813 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111816 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111825 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.111835 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111840 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.111843 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111855 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111866 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.111872 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111876 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.111880 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111884 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111889 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111893 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111899 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111903 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111913 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111921 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.111925 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111929 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.111933 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111937 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111947 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.111959 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111963 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.111967 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111971 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.111978 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111982 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111989 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.111992 1305484 command_runner.go:130] >     }
	I1218 00:29:08.112001 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.112004 1305484 command_runner.go:130] > }
	I1218 00:29:08.114369 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.114392 1305484 cache_images.go:86] Images are preloaded, skipping loading
	I1218 00:29:08.114401 1305484 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:29:08.114566 1305484 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
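
The drop-in above relies on the systemd idiom of an empty ExecStart= line, which clears the base unit's command before the versioned kubelet invocation is substituted. A sketch of assembling such a drop-in from the node's name and IP; a simplification, not minikube's templating code:

    package main

    import "fmt"

    // dropIn renders a kubelet systemd drop-in like the one logged above.
    func dropIn(k8sVersion, nodeName, nodeIP string) string {
    	return fmt.Sprintf(`[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

    [Install]
    `, k8sVersion, nodeName, nodeIP)
    }

    func main() {
    	fmt.Print(dropIn("v1.35.0-rc.1", "functional-232602", "192.168.49.2"))
    }
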
	I1218 00:29:08.114639 1305484 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:29:08.137373 1305484 command_runner.go:130] > {
	I1218 00:29:08.137395 1305484 command_runner.go:130] >   "cniconfig": {
	I1218 00:29:08.137400 1305484 command_runner.go:130] >     "Networks": [
	I1218 00:29:08.137405 1305484 command_runner.go:130] >       {
	I1218 00:29:08.137411 1305484 command_runner.go:130] >         "Config": {
	I1218 00:29:08.137420 1305484 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1218 00:29:08.137425 1305484 command_runner.go:130] >           "Name": "cni-loopback",
	I1218 00:29:08.137430 1305484 command_runner.go:130] >           "Plugins": [
	I1218 00:29:08.137433 1305484 command_runner.go:130] >             {
	I1218 00:29:08.137438 1305484 command_runner.go:130] >               "Network": {
	I1218 00:29:08.137442 1305484 command_runner.go:130] >                 "ipam": {},
	I1218 00:29:08.137452 1305484 command_runner.go:130] >                 "type": "loopback"
	I1218 00:29:08.137456 1305484 command_runner.go:130] >               },
	I1218 00:29:08.137463 1305484 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1218 00:29:08.137467 1305484 command_runner.go:130] >             }
	I1218 00:29:08.137470 1305484 command_runner.go:130] >           ],
	I1218 00:29:08.137483 1305484 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1218 00:29:08.137489 1305484 command_runner.go:130] >         },
	I1218 00:29:08.137494 1305484 command_runner.go:130] >         "IFName": "lo"
	I1218 00:29:08.137498 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137503 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137508 1305484 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1218 00:29:08.137515 1305484 command_runner.go:130] >     "PluginDirs": [
	I1218 00:29:08.137519 1305484 command_runner.go:130] >       "/opt/cni/bin"
	I1218 00:29:08.137522 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137526 1305484 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1218 00:29:08.137529 1305484 command_runner.go:130] >     "Prefix": "eth"
	I1218 00:29:08.137533 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137536 1305484 command_runner.go:130] >   "config": {
	I1218 00:29:08.137540 1305484 command_runner.go:130] >     "cdiSpecDirs": [
	I1218 00:29:08.137544 1305484 command_runner.go:130] >       "/etc/cdi",
	I1218 00:29:08.137554 1305484 command_runner.go:130] >       "/var/run/cdi"
	I1218 00:29:08.137569 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137573 1305484 command_runner.go:130] >     "cni": {
	I1218 00:29:08.137576 1305484 command_runner.go:130] >       "binDir": "",
	I1218 00:29:08.137580 1305484 command_runner.go:130] >       "binDirs": [
	I1218 00:29:08.137584 1305484 command_runner.go:130] >         "/opt/cni/bin"
	I1218 00:29:08.137587 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.137591 1305484 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1218 00:29:08.137595 1305484 command_runner.go:130] >       "confTemplate": "",
	I1218 00:29:08.137598 1305484 command_runner.go:130] >       "ipPref": "",
	I1218 00:29:08.137602 1305484 command_runner.go:130] >       "maxConfNum": 1,
	I1218 00:29:08.137606 1305484 command_runner.go:130] >       "setupSerially": false,
	I1218 00:29:08.137610 1305484 command_runner.go:130] >       "useInternalLoopback": false
	I1218 00:29:08.137613 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137620 1305484 command_runner.go:130] >     "containerd": {
	I1218 00:29:08.137627 1305484 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1218 00:29:08.137632 1305484 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1218 00:29:08.137639 1305484 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1218 00:29:08.137645 1305484 command_runner.go:130] >       "runtimes": {
	I1218 00:29:08.137648 1305484 command_runner.go:130] >         "runc": {
	I1218 00:29:08.137654 1305484 command_runner.go:130] >           "ContainerAnnotations": null,
	I1218 00:29:08.137665 1305484 command_runner.go:130] >           "PodAnnotations": null,
	I1218 00:29:08.137670 1305484 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1218 00:29:08.137674 1305484 command_runner.go:130] >           "cgroupWritable": false,
	I1218 00:29:08.137679 1305484 command_runner.go:130] >           "cniConfDir": "",
	I1218 00:29:08.137685 1305484 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1218 00:29:08.137689 1305484 command_runner.go:130] >           "io_type": "",
	I1218 00:29:08.137695 1305484 command_runner.go:130] >           "options": {
	I1218 00:29:08.137699 1305484 command_runner.go:130] >             "BinaryName": "",
	I1218 00:29:08.137703 1305484 command_runner.go:130] >             "CriuImagePath": "",
	I1218 00:29:08.137707 1305484 command_runner.go:130] >             "CriuWorkPath": "",
	I1218 00:29:08.137710 1305484 command_runner.go:130] >             "IoGid": 0,
	I1218 00:29:08.137715 1305484 command_runner.go:130] >             "IoUid": 0,
	I1218 00:29:08.137726 1305484 command_runner.go:130] >             "NoNewKeyring": false,
	I1218 00:29:08.137734 1305484 command_runner.go:130] >             "Root": "",
	I1218 00:29:08.137738 1305484 command_runner.go:130] >             "ShimCgroup": "",
	I1218 00:29:08.137742 1305484 command_runner.go:130] >             "SystemdCgroup": false
	I1218 00:29:08.137746 1305484 command_runner.go:130] >           },
	I1218 00:29:08.137752 1305484 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1218 00:29:08.137761 1305484 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1218 00:29:08.137764 1305484 command_runner.go:130] >           "runtimePath": "",
	I1218 00:29:08.137770 1305484 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1218 00:29:08.137780 1305484 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1218 00:29:08.137784 1305484 command_runner.go:130] >           "snapshotter": ""
	I1218 00:29:08.137787 1305484 command_runner.go:130] >         }
	I1218 00:29:08.137790 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137794 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137804 1305484 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1218 00:29:08.137817 1305484 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1218 00:29:08.137822 1305484 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1218 00:29:08.137828 1305484 command_runner.go:130] >     "disableApparmor": false,
	I1218 00:29:08.137835 1305484 command_runner.go:130] >     "disableHugetlbController": true,
	I1218 00:29:08.137840 1305484 command_runner.go:130] >     "disableProcMount": false,
	I1218 00:29:08.137844 1305484 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1218 00:29:08.137853 1305484 command_runner.go:130] >     "enableCDI": true,
	I1218 00:29:08.137857 1305484 command_runner.go:130] >     "enableSelinux": false,
	I1218 00:29:08.137862 1305484 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1218 00:29:08.137866 1305484 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1218 00:29:08.137871 1305484 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1218 00:29:08.137878 1305484 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1218 00:29:08.137882 1305484 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1218 00:29:08.137887 1305484 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1218 00:29:08.137894 1305484 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1218 00:29:08.137901 1305484 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137906 1305484 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1218 00:29:08.137921 1305484 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137929 1305484 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1218 00:29:08.137940 1305484 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1218 00:29:08.137943 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137947 1305484 command_runner.go:130] >   "features": {
	I1218 00:29:08.137952 1305484 command_runner.go:130] >     "supplemental_groups_policy": true
	I1218 00:29:08.137955 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137962 1305484 command_runner.go:130] >   "golang": "go1.24.9",
	I1218 00:29:08.137972 1305484 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137984 1305484 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137998 1305484 command_runner.go:130] >   "runtimeHandlers": [
	I1218 00:29:08.138001 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138005 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138009 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138019 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138022 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138025 1305484 command_runner.go:130] >     },
	I1218 00:29:08.138028 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138043 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138048 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138053 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138056 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138060 1305484 command_runner.go:130] >       "name": "runc"
	I1218 00:29:08.138065 1305484 command_runner.go:130] >     }
	I1218 00:29:08.138069 1305484 command_runner.go:130] >   ],
	I1218 00:29:08.138074 1305484 command_runner.go:130] >   "status": {
	I1218 00:29:08.138078 1305484 command_runner.go:130] >     "conditions": [
	I1218 00:29:08.138089 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138093 1305484 command_runner.go:130] >         "message": "",
	I1218 00:29:08.138097 1305484 command_runner.go:130] >         "reason": "",
	I1218 00:29:08.138101 1305484 command_runner.go:130] >         "status": true,
	I1218 00:29:08.138112 1305484 command_runner.go:130] >         "type": "RuntimeReady"
	I1218 00:29:08.138115 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138118 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138128 1305484 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1218 00:29:08.138137 1305484 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1218 00:29:08.138140 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138147 1305484 command_runner.go:130] >         "type": "NetworkReady"
	I1218 00:29:08.138150 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138155 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138178 1305484 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1218 00:29:08.138187 1305484 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1218 00:29:08.138192 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138197 1305484 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1218 00:29:08.138203 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138206 1305484 command_runner.go:130] >     ]
	I1218 00:29:08.138209 1305484 command_runner.go:130] >   }
	I1218 00:29:08.138212 1305484 command_runner.go:130] > }
	I1218 00:29:08.140863 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:08.140888 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:08.140910 1305484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
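
In the crictl info dump above, NetworkReady is false with reason NetworkPluginNotReady, which is why a CNI (kindnet) is recommended before kubeadm runs. A sketch of reading that condition back out of crictl info:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // runtimeStatus models the status.conditions slice of `crictl info`.
    type runtimeStatus struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status bool   `json:"status"`
    			Reason string `json:"reason"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "info").Output()
    	if err != nil {
    		panic(err)
    	}
    	var st runtimeStatus
    	if err := json.Unmarshal(out, &st); err != nil {
    		panic(err)
    	}
    	for _, c := range st.Status.Conditions {
    		if c.Type == "NetworkReady" {
    			fmt.Printf("NetworkReady=%v reason=%q\n", c.Status, c.Reason)
    		}
    	}
    }
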
	I1218 00:29:08.140937 1305484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:29:08.141052 1305484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 00:29:08.141124 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:29:08.148733 1305484 command_runner.go:130] > kubeadm
	I1218 00:29:08.148755 1305484 command_runner.go:130] > kubectl
	I1218 00:29:08.148759 1305484 command_runner.go:130] > kubelet
	I1218 00:29:08.149813 1305484 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:29:08.149929 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:29:08.157899 1305484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:29:08.171631 1305484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:29:08.185534 1305484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1218 00:29:08.199213 1305484 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:29:08.203261 1305484 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1218 00:29:08.203343 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:08.317482 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:08.643734 1305484 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:29:08.643804 1305484 certs.go:195] generating shared ca certs ...
	I1218 00:29:08.643833 1305484 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:08.644029 1305484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:29:08.644119 1305484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:29:08.644145 1305484 certs.go:257] generating profile certs ...
	I1218 00:29:08.644307 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:29:08.644441 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:29:08.644531 1305484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:29:08.644560 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 00:29:08.644603 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 00:29:08.644662 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 00:29:08.644693 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 00:29:08.644737 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 00:29:08.644768 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 00:29:08.644809 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 00:29:08.644841 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 00:29:08.644932 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:29:08.645003 1305484 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:29:08.645041 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:29:08.645094 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:29:08.645151 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:29:08.645217 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:29:08.645309 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:08.645380 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.645420 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.645463 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem -> /usr/share/ca-certificates/1261148.pem
	I1218 00:29:08.646318 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:29:08.666060 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:29:08.685232 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:29:08.704134 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:29:08.723554 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:29:08.741698 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:29:08.759300 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:29:08.777293 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:29:08.794355 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:29:08.812054 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:29:08.830087 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:29:08.847372 1305484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:29:08.860094 1305484 ssh_runner.go:195] Run: openssl version
	I1218 00:29:08.866090 1305484 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1218 00:29:08.866507 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.874034 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:29:08.881757 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885459 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885707 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885773 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.926478 1305484 command_runner.go:130] > 3ec20f2e
	I1218 00:29:08.926977 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:29:08.934462 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.941654 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:29:08.949245 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953111 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953171 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953238 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.993847 1305484 command_runner.go:130] > b5213941
	I1218 00:29:08.994434 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:29:09.002229 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.011682 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:29:09.020345 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025298 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025353 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025405 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.072271 1305484 command_runner.go:130] > 51391683
	I1218 00:29:09.072867 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
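
Each CA bundle is activated by hashing it with `openssl x509 -hash -noout` and symlinking /etc/ssl/certs/<hash>.0 to it, which is the hash / ln -fs / test -L sequence logged above for all three PEMs. A sketch that shells out to openssl the same way (assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the subject-hash symlink step from the log.
func linkCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // behave like ln -fs: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
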
	I1218 00:29:09.081208 1305484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085518 1305484 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085547 1305484 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1218 00:29:09.085554 1305484 command_runner.go:130] > Device: 259,1	Inode: 2346127     Links: 1
	I1218 00:29:09.085561 1305484 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:09.085576 1305484 command_runner.go:130] > Access: 2025-12-18 00:25:01.733890088 +0000
	I1218 00:29:09.085582 1305484 command_runner.go:130] > Modify: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085594 1305484 command_runner.go:130] > Change: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085606 1305484 command_runner.go:130] >  Birth: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085761 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:29:09.130673 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.131215 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:29:09.179276 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.179949 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:29:09.226958 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.227517 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:29:09.269182 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.269731 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:29:09.310659 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.311193 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 00:29:09.352162 1305484 command_runner.go:130] > Certificate will not expire
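
`openssl x509 -checkend 86400` asks whether a certificate expires within the next 24 hours; each control-plane cert above answers "Certificate will not expire". The same check in pure Go with crypto/x509 (cert path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d — the question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
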
	I1218 00:29:09.352228 1305484 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:09.352303 1305484 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:29:09.352361 1305484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:29:09.379004 1305484 cri.go:89] found id: ""
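
StartCluster first lists any paused kube-system containers through crictl; here the list comes back empty (found id: ""). A sketch of the same invocation via os/exec (assumes crictl on PATH and sudo rights):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log shows minikube running over ssh.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system container(s)\n", len(ids)) // 0 in this run
}
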
	I1218 00:29:09.379101 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:29:09.386224 1305484 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1218 00:29:09.386247 1305484 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1218 00:29:09.386254 1305484 command_runner.go:130] > /var/lib/minikube/etcd:
	I1218 00:29:09.387165 1305484 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:29:09.387182 1305484 kubeadm.go:598] restartPrimaryControlPlane start ...
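
The restart path (rather than a fresh kubeadm init) is chosen only because all three expected artifacts exist. A sketch of that gate, using the exact paths from the `sudo ls` above:

package main

import (
	"fmt"
	"os"
)

// canRestart mirrors the "found existing configuration files" decision:
// restart instead of re-init only when the kubelet config, the kubeadm
// flags file, and the etcd data dir are all present.
func canRestart() bool {
	for _, p := range []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() { fmt.Println("attempt cluster restart:", canRestart()) }
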
	I1218 00:29:09.387261 1305484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:29:09.396523 1305484 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:29:09.396996 1305484 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.397115 1305484 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232602" cluster setting kubeconfig missing "functional-232602" context setting]
	I1218 00:29:09.397401 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.397832 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.398029 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.398566 1305484 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1218 00:29:09.398586 1305484 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1218 00:29:09.398591 1305484 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1218 00:29:09.398599 1305484 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1218 00:29:09.398604 1305484 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1218 00:29:09.398644 1305484 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
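
After repairing the kubeconfig, minikube builds a REST client for https://192.168.49.2:8441 using the profile's client certificate, which is what the rest.Config dump above reports. A minimal client-go sketch of the equivalent load (assumes k8s.io/client-go in go.mod; kubeconfig path from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Loads cluster, context and the profile's client cert/key exactly as
	// kapi.go reports in its rest.Config dump above.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/22186-1259289/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", cfg.Host)                            // https://192.168.49.2:8441
	fmt.Println("client cert:", cfg.TLSClientConfig.CertFile) // profile client.crt
}
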
	I1218 00:29:09.398857 1305484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:29:09.408050 1305484 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1218 00:29:09.408132 1305484 kubeadm.go:602] duration metric: took 20.943322ms to restartPrimaryControlPlane
	I1218 00:29:09.408155 1305484 kubeadm.go:403] duration metric: took 55.931707ms to StartCluster
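
The "does not require reconfiguration" verdict comes from `sudo diff -u` exiting zero on the old and new kubeadm.yaml. A byte-for-byte comparison answers the same yes/no question; a sketch, assuming local read access to both files:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// minikube shells out to `sudo diff -u` over ssh; plain byte equality
	// is equivalent when only the yes/no answer matters.
	a, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	b, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err1 != nil || err2 != nil {
		fmt.Println("read error:", err1, err2)
		return
	}
	fmt.Println("needs reconfiguration:", !bytes.Equal(a, b))
}
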
	I1218 00:29:09.408213 1305484 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.408302 1305484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.409063 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.409379 1305484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 00:29:09.409544 1305484 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 00:29:09.409943 1305484 addons.go:70] Setting storage-provisioner=true in profile "functional-232602"
	I1218 00:29:09.409964 1305484 addons.go:239] Setting addon storage-provisioner=true in "functional-232602"
	I1218 00:29:09.409988 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.409637 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:09.410125 1305484 addons.go:70] Setting default-storageclass=true in profile "functional-232602"
	I1218 00:29:09.410148 1305484 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232602"
	I1218 00:29:09.410443 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.410469 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.418864 1305484 out.go:179] * Verifying Kubernetes components...
	I1218 00:29:09.421814 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:09.464044 1305484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 00:29:09.465759 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.465914 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.466265 1305484 addons.go:239] Setting addon default-storageclass=true in "functional-232602"
	I1218 00:29:09.466296 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.466740 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.466941 1305484 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.466952 1305484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 00:29:09.466995 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.523535 1305484 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:09.523562 1305484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 00:29:09.523638 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.539603 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.550039 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.631300 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:09.666484 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.687810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
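
Addon manifests are applied with the guest's own kubectl against the guest kubeconfig, exactly as the two Run lines above show. A sketch of that invocation via os/exec (sudo accepts the leading KUBECONFIG= environment assignment):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Binary and paths taken from the log lines above.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("apply failed (minikube retries this):", err)
	}
}
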
	I1218 00:29:10.394630 1305484 node_ready.go:35] waiting up to 6m0s for node "functional-232602" to be "Ready" ...
	I1218 00:29:10.394645 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.394905 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.394947 1305484 retry.go:31] will retry after 177.31527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.395055 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.395073 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395086 1305484 retry.go:31] will retry after 150.104012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
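
Both applies fail here because the apiserver on localhost:8441 is not yet accepting connections, so every failure is retried after a short, growing, jittered delay (177ms, 150ms, 386ms, ... several seconds later in the log). A generic sketch of that retry shape in plain Go; the attempt count and base delay are illustrative, not minikube's actual tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs op until it succeeds or attempts are exhausted, sleeping a
// jittered, growing delay in between — the pattern retry.go logs above.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	i := 0
	_ = retry(5, 150*time.Millisecond, func() error {
		if i++; i < 4 {
			return errors.New("connection refused") // apiserver still starting
		}
		return nil
	})
}
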
	I1218 00:29:10.395151 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.395566 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.545905 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:10.572498 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.615825 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.615864 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.615882 1305484 retry.go:31] will retry after 386.236336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650773 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.650838 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650865 1305484 retry.go:31] will retry after 280.734601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.894991 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.895069 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.932808 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.998277 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.998407 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.998429 1305484 retry.go:31] will retry after 660.849815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.003467 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.066495 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.066548 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.066567 1305484 retry.go:31] will retry after 792.514458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.395083 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.659960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:11.722453 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.722493 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.722511 1305484 retry.go:31] will retry after 472.801155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.859919 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.895517 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.895589 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.895884 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.931975 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.936172 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.936234 1305484 retry.go:31] will retry after 583.966469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.195539 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:12.255280 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.259094 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.259131 1305484 retry.go:31] will retry after 926.212833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.395399 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.395475 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.395812 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:12.395919 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
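
The repeated GETs of /api/v1/nodes/functional-232602 every ~500ms are node_ready waiting up to 6m for the node's Ready condition, tolerating connection-refused while the apiserver restarts. A client-go sketch of that wait (node name and kubeconfig path from the log; assumes client-go in go.mod):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms, tolerating transient errors
// (e.g. connection refused while the apiserver restarts).
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/22186-1259289/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", waitNodeReady(cs, "functional-232602", 6*time.Minute))
}
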
	I1218 00:29:12.520996 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:12.581638 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.581728 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.581762 1305484 retry.go:31] will retry after 1.65494693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.894979 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.895073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.895402 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.186032 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:13.243730 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:13.248249 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.248281 1305484 retry.go:31] will retry after 1.192911742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.395563 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.395681 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.395976 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.895848 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.895954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.896330 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:14.237854 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:14.298889 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.302600 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.302641 1305484 retry.go:31] will retry after 1.5263786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.395779 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.395871 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.396209 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:14.396293 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:14.441356 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:14.508115 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.508165 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.508184 1305484 retry.go:31] will retry after 3.305911776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.895774 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.895890 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.896219 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.394975 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.395063 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.395415 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.829900 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:15.892510 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:15.892556 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.892574 1305484 retry.go:31] will retry after 3.944012673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.895725 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.895798 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.896127 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.394873 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.394951 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.395246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.894968 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.895399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:16.895481 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:17.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:17.814960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:17.873346 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:17.873415 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.873437 1305484 retry.go:31] will retry after 2.287204088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.895511 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.895579 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.895833 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.395764 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.395845 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.396148 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.895031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.895440 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:19.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.395284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.395328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:19.836815 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:19.891772 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895038 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.895109 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.895347 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.895501 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895520 1305484 retry.go:31] will retry after 2.272181462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.160871 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:20.233754 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:20.233805 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.233824 1305484 retry.go:31] will retry after 9.03130365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
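Note that the validation failure is itself a connectivity symptom: kubectl validates manifests against the apiserver's /openapi/v2 endpoint, so while port 8441 refuses connections every apply fails before the manifest is even submitted. The --validate=false workaround quoted in the error would skip that client-side check, but the apply itself still needs a live apiserver, so it would fail the same way here. A quick reproduction sketch, with the paths and flags taken from the log (illustrative only, not part of the test):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same manifest and kubeconfig as the log; --validate=false skips the
		// client-side OpenAPI check, but the server-side request still needs
		// a reachable apiserver, so this fails identically while 8441 is down.
		cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\nerr: %v\n", out, err)
	}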
	I1218 00:29:20.395263 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.395392 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.395710 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:20.894916 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.894992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.895300 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:21.395041 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.395135 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.395466 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:21.395525 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:21.894931 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.895012 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.895341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.168810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:22.226105 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:22.229620 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.229649 1305484 retry.go:31] will retry after 6.326012676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.394929 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.395003 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.395290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.894864 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.894939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.895280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.395383 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.895016 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.895087 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.895360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:23.895414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:24.395042 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.395119 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:24.895109 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.895188 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.395358 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.395437 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.395700 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.895538 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.895612 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.895906 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:25.895954 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:26.395465 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.395571 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.395892 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:26.895653 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.895735 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.395741 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.395852 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.396210 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.895856 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.895939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.896273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:27.896328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:28.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.394974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.395220 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:28.556610 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:28.617128 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:28.617182 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.617202 1305484 retry.go:31] will retry after 6.797257953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.895668 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.895975 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.265354 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:29.327180 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:29.327227 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.327246 1305484 retry.go:31] will retry after 10.081474738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.395407 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.395481 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.395821 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.895626 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.895701 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:30.395476 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:30.395558 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:30.395870 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:30.395928 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:30.895674 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:30.895771 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:30.896102 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:31.395677 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:31.395765 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:31.396042 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:31.895800 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:31.895892 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:31.896225 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:32.395871 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:32.395946 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:32.396238 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:32.396286 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:32.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:32.894971 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:32.895221 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:33.394922 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:33.395006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:33.395329 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:33.894947 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:33.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:33.895346 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:34.395004 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:34.395075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:34.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:34.894995 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:34.895096 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:34.895485 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:34.895540 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:35.395275 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:35.395369 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:35.395683 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:35.415065 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:35.470618 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:35.474707 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:35.474739 1305484 retry.go:31] will retry after 12.346765183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:35.894884 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:35.894968 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:35.895217 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:36.394940 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:36.395023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:36.395311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:36.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:36.895031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:36.895297 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:37.395715 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:37.395786 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:37.396036 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:37.396085 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:37.895882 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:37.895957 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:37.896282 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:38.394978 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:38.395072 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:38.395404 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:38.894973 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:38.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:38.895353 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:39.394982 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:39.395085 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:39.395413 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:39.409781 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:39.473091 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:39.473144 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:39.473164 1305484 retry.go:31] will retry after 18.475103934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:39.895746 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:39.895826 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:39.896182 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:39.896239 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:40.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:40.394986 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:40.395287 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:40.894982 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:40.895057 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:40.895387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:41.395082 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:41.395197 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:41.395487 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:41.894877 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:41.894953 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:41.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:42.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:42.395028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:42.395341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:42.395398 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:42.895053 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:42.895127 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:42.895451 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:43.394921 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:43.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:43.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:43.894994 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:43.895073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:43.895439 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:44.395145 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:44.395224 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:44.395532 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:44.395581 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:44.895223 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:44.895291 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:44.895552 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:45.395338 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:45.395498 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:45.396157 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:45.894915 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:45.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:45.895368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:46.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:46.394994 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:46.395277 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:46.894960 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:46.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:46.895364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:46.895417 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:47.395091 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:47.395170 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:47.395536 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:47.821776 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:47.880326 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:47.883900 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:47.883932 1305484 retry.go:31] will retry after 18.240859758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:47.895204 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:47.895277 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:47.895522 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:48.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:48.395039 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:48.395369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:48.895103 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:48.895186 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:48.895530 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:48.895589 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:49.395004 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:49.395084 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:49.395389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:49.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:49.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:49.895387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:50.395307 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:50.395385 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:50.395702 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:50.895512 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:50.895597 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:50.895908 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:50.895965 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:51.395762 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:51.395833 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:51.396181 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:51.894896 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:51.894981 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:51.895266 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:52.394927 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:52.395005 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:52.395321 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:52.894986 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:52.895064 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:52.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:53.395068 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:53.395156 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:53.395497 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:53.395555 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:53.894871 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:53.894939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:53.895228 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:54.395496 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:54.395573 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:54.395859 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:54.895684 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:54.895759 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:54.896113 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:55.394869 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:55.394953 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:55.395245 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:55.894992 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:55.895075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:55.895404 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:55.895459 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:56.394949 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:56.395026 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:56.395302 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:56.894957 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:56.895052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:56.895354 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:57.395063 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:57.395338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.895034 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:57.895127 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:57.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.948848 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:58.011608 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:58.015264 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:58.015303 1305484 retry.go:31] will retry after 17.396243449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:58.394837 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:58.394927 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:58.395242 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:58.395294 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same ~500ms poll, with a node_ready "connection refused" warning roughly every 2.5s, repeated through 00:30:05.895 ...]
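Each GET block above is one iteration of the node-readiness wait that node_ready.go logs: fetch the node, check its Ready condition, and keep polling on connection errors. A hedged client-go sketch of such a loop (waitNodeReady is an invented name; minikube's actual implementation may differ):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms until its Ready condition is True,
// logging and retrying on transient errors such as "connection refused".
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Matches the "(will retry)" warnings in the log: keep polling.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-232602"); err != nil {
		panic(err)
	}
}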
	I1218 00:30:06.125881 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:06.190863 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:06.190916 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:06.190936 1305484 retry.go:31] will retry after 24.931144034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:06.395236 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:06.395314 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:06.395677 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling and periodic node_ready "connection refused" warnings continue through 00:30:15.395 ...]
	I1218 00:30:15.411948 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:15.467885 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:15.471996 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:15.472026 1305484 retry.go:31] will retry after 23.671964263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:15.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:15.895665 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:15.895991 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling and periodic node_ready "connection refused" warnings continue through 00:30:30.895 ...]
	I1218 00:30:31.123262 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:31.181409 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.184938 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.185056 1305484 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
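Note what actually failed here: kubectl apply's validation first downloads the OpenAPI schema from the apiserver, so with nothing listening on port 8441 even a well-formed manifest dies at the validation step. One way to surface the real fault earlier would be a preflight health probe before applying; a sketch under that assumption (this is not something minikube does in this log, and /healthz may require auth on hardened clusters):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy probes the /healthz endpoint. The apiserver's self-signed
// cert is not verified here (InsecureSkipVerify) because this is only a
// reachability check, not an authenticated API call.
func apiserverHealthy(base string) bool {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	if !apiserverHealthy("https://192.168.49.2:8441") {
		fmt.Println("apiserver not reachable; skipping addon apply instead of failing validation")
		return
	}
	fmt.Println("apiserver healthy; safe to kubectl apply")
}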
	I1218 00:30:31.395353 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:31.395427 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:31.395686 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling and periodic node_ready "connection refused" warnings continue through 00:30:38.895 ...]
	I1218 00:30:39.144879 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:39.206506 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206561 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206652 1305484 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 00:30:39.209780 1305484 out.go:179] * Enabled addons: 
	I1218 00:30:39.213292 1305484 addons.go:530] duration metric: took 1m29.803748848s for enable addons: enabled=[]
	I1218 00:30:39.394864 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:39.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:39.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:39.395343 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling and periodic node_ready "connection refused" warnings continue through 00:30:50.395, where the captured log ends mid-request ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:50.395669 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:50.395726 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:50.895464 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:50.895541 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:50.895800 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:51.395565 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:51.395643 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:51.395951 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:51.895746 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:51.895820 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:51.896139 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:52.395800 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:52.395866 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:52.396109 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:52.396147 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:52.894845 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:52.894930 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:52.895239 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:53.394949 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:53.395031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:53.395362 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:53.894914 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:53.894994 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:53.895246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:54.394927 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:54.395001 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:54.395316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:54.895019 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:54.895132 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:54.895462 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:54.895517 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:55.395382 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:55.395459 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:55.395747 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:55.895567 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:55.895652 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:55.896004 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:56.395794 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:56.395876 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:56.396202 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:56.894849 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:56.894918 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:56.895248 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:57.394940 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:57.395023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:57.395357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:57.395414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:57.895089 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:57.895163 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:57.895506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:58.395157 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:58.395224 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:58.395467 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:58.894926 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:58.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:58.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:59.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:59.394999 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:59.395287 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:59.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:59.894968 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:59.895216 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:59.895259 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:00.395510 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:00.395606 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:00.395915 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:00.895683 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:00.895763 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:00.896072 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:01.395863 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:01.395942 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:01.396196 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:01.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:01.894969 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:01.895310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:01.895364 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:02.395506 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:02.395587 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:02.395926 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:02.895711 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:02.895787 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:02.896085 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:03.394835 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:03.394918 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:03.395241 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:03.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:03.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:03.895351 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:03.895409 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:04.394887 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:04.394952 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:04.395203 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:04.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:04.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:04.895585 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:05.395452 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:05.395534 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:05.395859 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:05.895595 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:05.895675 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:05.895945 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:05.895986 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:06.395824 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:06.395899 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:06.396242 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:06.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:06.895023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:06.895350 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:07.395035 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:07.395109 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:07.395364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:07.894882 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:07.894960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:07.895283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:08.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:08.395097 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:08.395422 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:08.395475 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:08.895113 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:08.895185 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:08.895437 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:09.394963 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:09.395061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:09.395425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:09.894913 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:09.894995 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:09.895574 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:10.395178 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:10.395258 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:10.395523 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:10.395562 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:10.895006 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:10.895092 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:10.895441 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:11.395247 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:11.395326 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:11.395703 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:11.895773 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:11.895839 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:11.896085 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:12.395833 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:12.395908 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:12.396246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:12.396315 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:12.894849 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:12.894941 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:12.895310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:13.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.395339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:13.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.895004 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.895326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.394884 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.395283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.894810 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.894876 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.895171 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:14.895233 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:15.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.395266 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.395614 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:15.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.895319 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.394906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.394976 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.395230 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:16.895449 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:17.395167 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.395260 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.395607 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:17.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.895160 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.895445 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.394976 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.395051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.395367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.895357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:19.395005 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.395075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.395332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:19.395376 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:19.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.395282 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.395364 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.395694 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.895475 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.895552 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.895809 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:21.395604 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.395678 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.395990 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:21.396041 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:21.895659 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.895733 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.896015 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.395655 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.395728 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.395992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.895435 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.895515 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.895848 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:23.395649 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.395732 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.396074 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:23.396134 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:23.895883 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.895960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.896252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.394837 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.394919 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.395263 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.894847 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.894932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.895271 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.395082 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.395154 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.395412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.895068 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.895145 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.895475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:25.895531 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:26.395075 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.395488 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:26.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.895250 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.395036 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.395377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.895371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:28.395072 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.395417 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:28.395459 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:28.894956 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.895034 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:29.395100 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:29.395178 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:29.395520 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:29.894938 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:29.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:29.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:30.395237 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:30.395365 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:30.395704 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:30.395760 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:30.895519 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:30.895599 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:30.895940 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:31.395676 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:31.395750 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:31.396048 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:31.895809 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:31.895895 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:31.896244 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:32.394845 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:32.394971 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:32.395293 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:32.894900 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:32.894975 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:32.895268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:32.895326 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:33.394994 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:33.395070 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:33.395437 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:33.895135 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:33.895218 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:33.895535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:34.395882 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:34.395954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:34.396208 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:34.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:34.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:34.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:34.895368 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:35.395101 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:35.395178 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:35.395506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:35.895173 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:35.895249 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:35.895577 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:36.394916 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:36.394992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:36.395327 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:36.894927 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:36.895007 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:36.895323 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:37.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:37.394967 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:37.395252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:37.395302 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:37.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:37.895009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:37.895332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:38.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:38.395038 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:38.395371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:38.895059 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:38.895134 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:38.895394 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:39.394962 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:39.395049 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:39.395388 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:39.395443 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:39.895187 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:39.895277 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:39.895635 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:40.395270 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:40.395343 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:40.395589 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:40.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:40.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:40.895352 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:41.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:41.395047 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:41.395386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:41.895073 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:41.895149 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:41.895412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:41.895454 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:42.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:42.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:42.395389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:42.895106 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:42.895183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:42.895531 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:43.394891 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:43.394962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:43.395291 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:43.894960 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:43.895052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:43.895424 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:43.895479 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:44.394928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:44.395009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:44.395368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:44.895047 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:44.895117 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:44.895407 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:45.395328 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:45.395422 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:45.395783 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:45.895608 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:45.895699 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:45.896131 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:45.896187 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:46.394880 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:46.394956 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:46.395280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:46.894977 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:46.895051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:46.895386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:47.395116 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:47.395191 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:47.395557 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:47.894966 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:47.895047 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:47.895364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:48.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:48.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:48.395365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:48.395424 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:48.895132 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:48.895210 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:48.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:49.394910 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:49.395006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:49.395291 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:49.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:49.895027 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:49.895327 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:50.395224 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:50.395303 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:50.395651 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:50.395707 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:50.895406 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:50.895483 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:50.895757 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:51.395554 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:51.395639 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:51.395931 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:51.895695 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:51.895768 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:51.896100 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:52.395729 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:52.395811 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:52.396079 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:52.396127 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:52.895894 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:52.895969 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:52.896306 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:53.395050 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:53.395150 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:53.395532 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:53.894992 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:53.895062 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:53.895316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:54.394937 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:54.395011 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:54.395316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:54.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:54.895025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:54.895320 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:54.895366 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:55.395222 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:55.395291 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:55.395575 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:55.894969 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:55.895061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:55.895409 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:56.394936 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:56.395020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:56.395338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:56.895032 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:56.895105 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:56.895403 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:56.895458 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:57.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:57.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:57.395357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:57.895074 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:57.895154 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:57.895479 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:58.394862 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:58.394940 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:58.395279 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:58.894867 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:58.894948 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:58.895307 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:59.394852 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:59.394934 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:59.395284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:59.395339 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:59.895771 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:59.895849 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:59.896110 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:00.395197 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:00.395298 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:00.395737 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:00.895502 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:00.895586 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:00.895905 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:01.395709 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:01.395787 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:01.396061 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:01.396105 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:01.895861 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:01.895937 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:01.896281 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:02.394879 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:02.394956 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:02.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:02.894927 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:02.894996 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:02.895304 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:03.394949 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:03.395024 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:03.395316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:03.894986 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:03.895072 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:03.895410 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:03.895469 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:04.394974 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:04.395044 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:04.395298 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:04.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:04.895013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:04.895367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:05.395185 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:05.395270 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:05.395588 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:05.895256 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:05.895330 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:05.895578 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:05.895621 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:06.394951 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:06.395028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:06.395363 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:06.894980 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:06.895071 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:06.895448 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:07.394974 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:07.395050 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:07.395340 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:07.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:07.895014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:07.895364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:08.394964 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:08.395043 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:08.395361 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:08.395415 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:08.894899 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:08.894972 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:08.895268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:09.394899 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:09.394974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:09.395311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:09.895046 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:09.895131 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:09.895449 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:10.395311 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:10.395381 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:10.395635 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:10.395676 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:10.895273 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:10.895354 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:10.895754 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:11.395292 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:11.395374 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:11.395675 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:11.895376 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:11.895441 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:11.895684 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:12.395437 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:12.395517 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:12.395849 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:12.395904 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:12.895550 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:12.895627 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:12.895939 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:13.395711 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:13.395791 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:13.396074 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:13.895885 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:13.895958 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:13.896301 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:14.394858 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:14.394930 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:14.395206 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:14.894877 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:14.894945 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:14.895220 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:14.895266 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:15.395218 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:15.395306 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:15.395672 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:15.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:15.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:15.895349 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:16.395013 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:16.395091 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:16.395345 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:16.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:16.895027 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:16.895366 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:16.895425 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:17.394978 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:17.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:17.395398 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:17.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:17.894962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:17.895274 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:18.394946 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:18.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:18.395398 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:18.895113 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:18.895192 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:18.895502 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:18.895550 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:19.394871 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:19.394940 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:19.395195 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:19.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:19.895024 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:19.895370 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:20.395222 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:20.395299 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:20.395647 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:20.894912 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:20.894993 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:20.895294 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:21.394929 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:21.395008 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:21.395303 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:21.395350 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:21.894951 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:21.895043 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:21.895374 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:22.394838 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:22.394910 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:22.395188 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:22.894920 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:22.894994 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:22.895324 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:23.395047 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:23.395131 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:23.395465 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:23.395520 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:23.894892 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:23.894964 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:23.895300 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:24.394977 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:24.395051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:24.395375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:24.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:24.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:24.895362 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:25.395258 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:25.395335 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:25.395602 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:25.395653 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:25.894954 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:25.895029 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:25.895416 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:26.394976 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:26.395052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:26.395371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:26.895075 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:26.895153 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:26.895415 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:27.394948 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:27.395028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:27.395389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:27.894954 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:27.895033 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:27.895356 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:27.895426 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:28.395097 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:28.395171 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:28.395489 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:28.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:28.895036 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:28.895381 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:29.395111 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:29.395193 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:29.395536 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:29.895559 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:29.895634 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:29.895935 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:29.895990 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:30.395759 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:30.395836 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:30.396159 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:30.894851 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:30.894931 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:30.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:31.394947 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:31.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:31.395281 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:31.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:31.895013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:31.895344 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:32.395052 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:32.395132 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:32.395475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:32.395535 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:32.894897 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:32.894975 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:32.895317 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:33.395060 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:33.395183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:33.395508 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:33.895211 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:33.895286 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:33.895620 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:34.394801 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:34.394869 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:34.395114 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:34.894830 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:34.894907 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:34.895223 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:34.895273 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 poll above repeated every ~500ms from 00:32:35 through 00:33:35, each attempt returning no response (status="" milliseconds=0), with the same node_ready.go:55 "will retry" warning for "connect: connection refused" logged roughly every 2s]
	I1218 00:33:36.394950 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:36.395033 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:36.395348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:36.395402 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:36.895071 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:36.895153 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:36.895476 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:37.394881 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:37.394952 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:37.395268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:37.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:37.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:37.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:38.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:38.395002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:38.395360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:38.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:38.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:38.895305 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:38.895353 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:39.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:39.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:39.395364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:39.895212 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:39.895299 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:39.895609 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:40.395293 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:40.395361 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:40.395613 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:40.894947 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:40.895022 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:40.895328 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:40.895383 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:41.395069 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:41.395147 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:41.395453 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:41.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:41.895013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:41.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:42.394951 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:42.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:42.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:42.895138 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:42.895215 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:42.895542 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:42.895601 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:43.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:43.395017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:43.395278 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:43.895604 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:43.895677 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:43.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:44.395290 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:44.395367 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:44.395718 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:44.895507 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:44.895582 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:44.895842 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:44.895892 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:45.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:45.395038 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:45.395360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:45.894958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:45.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:45.895369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:46.395070 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:46.395160 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:46.395494 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:46.894943 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:46.895019 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:46.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:47.394992 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:47.395069 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:47.395419 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:47.395483 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:47.894889 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:47.894965 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:47.895236 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:48.394934 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:48.395017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:48.395366 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:48.895064 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:48.895145 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:48.895481 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:49.395814 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:49.395888 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:49.396152 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:49.396201 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:49.894944 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:49.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:49.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:50.395242 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:50.395323 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:50.395662 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:50.894864 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:50.894942 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:50.895212 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:51.394982 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:51.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:51.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:51.895127 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:51.895213 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:51.895688 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:51.895762 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:52.395524 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:52.395609 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:52.395929 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:52.895771 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:52.895845 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:52.896160 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:53.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:53.395003 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:53.395295 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:53.894861 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:53.894932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:53.895273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:54.394811 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:54.394887 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:54.395224 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:54.395284 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:54.895871 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:54.895944 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:54.896276 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:55.395161 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:55.395236 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:55.395523 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:55.894926 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:55.895000 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:55.895285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:56.394976 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:56.395057 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:56.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:56.395441 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:56.895820 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:56.895899 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:56.896155 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:57.394899 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:57.394982 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:57.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:57.894987 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:57.895075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:57.895413 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:58.395076 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:58.395146 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:58.395477 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:58.395535 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:58.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:58.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:58.895324 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:59.395049 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:59.395125 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:59.395417 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:59.894913 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:59.894984 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:59.895292 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:00.395314 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:00.395415 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:00.395786 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:00.395854 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:00.895591 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:00.895666 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:00.896029 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:01.395664 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:01.395737 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:01.395997 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:01.895814 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:01.895904 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:01.896249 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:02.394968 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:02.395057 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:02.395421 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:02.895119 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:02.895193 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:02.895464 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:02.895507 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:03.395162 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:03.395245 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:03.395584 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:03.895306 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:03.895387 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:03.895714 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:04.395125 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:04.395233 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:04.395547 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:04.895240 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:04.895314 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:04.895659 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:04.895713 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:05.395523 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:05.395602 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:05.395951 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:05.895711 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:05.895784 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:05.896083 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:06.395846 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:06.395920 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:06.396255 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:06.894862 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:06.894944 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:06.895288 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:07.394985 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:07.395056 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:07.395319 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:07.395361 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:07.895013 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:07.895141 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:07.895473 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:08.395190 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:08.395267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:08.395601 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:08.895088 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:08.895159 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:08.895466 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:09.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:09.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:09.395397 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:09.395453 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:09.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:09.895016 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:09.895351 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:10.395167 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:10.395240 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:10.395490 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:10.895174 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:10.895254 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:10.895552 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:11.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:11.395031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:11.395429 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:11.395490 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:11.895021 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:11.895089 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:11.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:12.395645 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:12.395720 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:12.396082 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:12.895753 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:12.895830 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:12.896143 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:13.394854 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:13.394925 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:13.395193 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:13.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:13.895010 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:13.895299 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:13.895347 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:14.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:14.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:14.395375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:14.895035 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:14.895129 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:14.895451 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:15.395317 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:15.395394 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:15.395684 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:15.895487 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:15.895571 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:15.895903 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:15.895957 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:16.395670 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:16.395737 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:16.395998 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:16.895851 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:16.895945 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:16.896285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:17.394992 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:17.395074 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:17.395402 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:17.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:17.894981 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:17.895249 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:18.394916 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:18.394994 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:18.395317 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:18.395371 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:18.894956 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:18.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:18.895376 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:19.394872 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:19.394962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:19.395266 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:19.894954 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:19.895029 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:19.895389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:20.395179 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:20.395258 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:20.395604 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:20.395662 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:20.894898 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:20.894974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:20.895244 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:21.394932 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:21.395016 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:21.395326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:21.894952 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:21.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:21.895353 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:22.394923 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:22.394996 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:22.395310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:22.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:22.895014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:22.895350 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:22.895406 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:23.395099 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:23.395183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:23.395522 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:23.895196 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:23.895267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:23.895572 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:24.394919 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:24.394997 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:24.395328 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:24.894967 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:24.895049 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:24.895386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:24.895443 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:25.395131 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:25.395205 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:25.395456 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:25.894945 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:25.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:25.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 poll repeats every ~500ms from 00:34:26.394 through 00:35:09.395, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused" and re-logging the node_ready.go:55 retry warning roughly every 2s; only the final cycle is kept below ...]
	I1218 00:35:09.895568 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:09.895676 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:09.896021 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:09.896082 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:10.395155 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:10.395216 1305484 node_ready.go:38] duration metric: took 6m0.000503053s for node "functional-232602" to be "Ready" ...
	I1218 00:35:10.402744 1305484 out.go:203] 
	W1218 00:35:10.405748 1305484 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 00:35:10.405971 1305484 out.go:285] * 
	W1218 00:35:10.408384 1305484 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:35:10.411337 1305484 out.go:203] 
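
Note: the six minutes of retries above are minikube's node-readiness wait loop hitting its deadline. As a rough illustration of that pattern (a minimal sketch, not minikube's actual implementation; the helper name waitNodeReady, the plain http.Get, and the 500ms cadence are assumptions inferred from the log):

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	// waitNodeReady polls url every 500ms until it answers 200 OK or the
	// context deadline expires, mirroring the request cadence in the log above.
	func waitNodeReady(ctx context.Context, url string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-tick.C:
				resp, err := http.Get(url)
				if err != nil {
					continue // e.g. "connect: connection refused": retry until deadline
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-232602"); err != nil {
			fmt.Println("failed to start node:", err) // cf. the GUEST_START exit above
		}
	}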
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866402430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866417380Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866460874Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866476414Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866485874Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866499339Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866509103Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866525422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866545812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866575858Z" level=info msg="Connect containerd service"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.866870260Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.867476540Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.886162821Z" level=info msg="Start subscribing containerd event"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.886328101Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.886539920Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.886400657Z" level=info msg="Start recovering state"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.944741362Z" level=info msg="Start event monitor"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.944959236Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945049663Z" level=info msg="Start streaming server"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945134772Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945359522Z" level=info msg="runtime interface starting up..."
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945434680Z" level=info msg="starting plugins..."
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.945497316Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 00:29:07 functional-232602 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 00:29:07 functional-232602 containerd[5205]: time="2025-12-18T00:29:07.947920852Z" level=info msg="containerd successfully booted in 0.102488s"
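
The "failed to load cni during init" line above means containerd found no network config under /etc/cni/net.d at startup. A self-contained Go sketch of that check (illustrative only, not part of the test harness):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// containerd's CRI plugin looks for *.conf/*.conflist files here.
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			fmt.Println("cannot read /etc/cni/net.d:", err)
			return
		}
		if len(entries) == 0 {
			fmt.Println("no network config found in /etc/cni/net.d") // containerd's complaint above
			return
		}
		for _, e := range entries {
			fmt.Println(e.Name())
		}
	}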
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:35:14.379605    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:14.380102    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:14.381567    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:14.381916    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:14.383348    8555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
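
Every stderr line above reduces to one symptom: nothing is listening on the apiserver port. A standalone Go probe that reproduces the same diagnosis (illustrative; only the address is taken from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err) // expected here: connection refused
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}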
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:35:14 up  7:17,  0 user,  load average: 0.11, 0.24, 0.64
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 18 00:35:11 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:11 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:11 functional-232602 kubelet[8363]: E1218 00:35:11.957381    8363 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:11 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:12 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 18 00:35:12 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:12 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:12 functional-232602 kubelet[8427]: E1218 00:35:12.711528    8427 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:12 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:12 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:13 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 18 00:35:13 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:13 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:13 functional-232602 kubelet[8455]: E1218 00:35:13.449835    8455 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:13 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:13 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:14 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 18 00:35:14 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:14 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:14 functional-232602 kubelet[8501]: E1218 00:35:14.191680    8501 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:14 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:14 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
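
The restart loop above (counter 811 through 814) is driven by a single validation error: this kubelet build refuses to run on a cgroup v1 host. A hedged Go sketch of one way to tell which cgroup hierarchy a host exposes (CGROUP2_SUPER_MAGIC is the kernel constant from linux/magic.h; everything else is illustrative and needs the golang.org/x/sys module):

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	const cgroup2Magic = 0x63677270 // CGROUP2_SUPER_MAGIC

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs:", err)
			return
		}
		if st.Type == cgroup2Magic {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1: this kubelet refuses to start here") // the error above
		}
	}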
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (380.076104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (2.34s)
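
For reference, --format={{.APIServer}} in the status command above is a Go text/template evaluated against minikube's status struct. A tiny sketch of that mechanism (the Status struct here is illustrative; only the APIServer field name comes from the command above):

	package main

	import (
		"os"
		"text/template"
	)

	type Status struct{ Host, Kubelet, APIServer string }

	func main() {
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
	}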

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 kubectl -- --context functional-232602 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 kubectl -- --context functional-232602 get pods: exit status 1 (142.277874ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-232602 kubectl -- --context functional-232602 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
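The inspect output shows the apiserver port 8441/tcp published to 127.0.0.1:33905 on the host, so the Docker port mapping itself is intact. An illustrative probe (not harness output) that separates a broken mapping from a dead apiserver:

	# expected to fail with "connection refused" while the apiserver is down;
	# any TLS or HTTP response would instead point at an in-cluster problem
	curl -sk https://127.0.0.1:33905/healthz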
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (310.607996ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
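The --format arguments here are Go templates over minikube's status fields; the harness reads one field at a time ({{.Host}} above, {{.APIServer}} earlier). The same fields can be queried manually, e.g. (illustrative invocations):

	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p functional-232602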
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-739047 image ls --format short --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls --format yaml --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh     │ functional-739047 ssh pgrep buildkitd                                                                                                                 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ image   │ functional-739047 image ls --format json --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls --format table --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr                                                │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls                                                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ delete  │ -p functional-739047                                                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ start   │ -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ start   │ -p functional-232602 --alsologtostderr -v=8                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:29 UTC │                     │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:latest                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add minikube-local-cache-test:functional-232602                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache delete minikube-local-cache-test:functional-232602                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl images                                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-232602 cache reload                                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ kubectl │ functional-232602 kubectl -- --context functional-232602 get pods                                                                                     │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:29:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:29:05.243654 1305484 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:29:05.243837 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.243867 1305484 out.go:374] Setting ErrFile to fd 2...
	I1218 00:29:05.243888 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.244277 1305484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:29:05.244868 1305484 out.go:368] Setting JSON to false
	I1218 00:29:05.245808 1305484 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25892,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:29:05.245939 1305484 start.go:143] virtualization:  
	I1218 00:29:05.249423 1305484 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:29:05.253059 1305484 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:29:05.253187 1305484 notify.go:221] Checking for updates...
	I1218 00:29:05.259241 1305484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:29:05.262171 1305484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:05.265173 1305484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:29:05.268135 1305484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:29:05.270950 1305484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:29:05.274293 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:05.274440 1305484 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:29:05.308275 1305484 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:29:05.308407 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.375725 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.366230286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.375834 1305484 docker.go:319] overlay module found
	I1218 00:29:05.378939 1305484 out.go:179] * Using the docker driver based on existing profile
	I1218 00:29:05.381619 1305484 start.go:309] selected driver: docker
	I1218 00:29:05.381657 1305484 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.381752 1305484 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:29:05.381892 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.440724 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.431205912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.441147 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:05.441215 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:05.441270 1305484 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.444475 1305484 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:29:05.447488 1305484 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:29:05.450519 1305484 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:29:05.453580 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:05.453631 1305484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:29:05.453641 1305484 cache.go:65] Caching tarball of preloaded images
	I1218 00:29:05.453681 1305484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:29:05.453745 1305484 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:29:05.453756 1305484 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:29:05.453862 1305484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:29:05.474116 1305484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:29:05.474140 1305484 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:29:05.474160 1305484 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:29:05.474205 1305484 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:29:05.474271 1305484 start.go:364] duration metric: took 39.072µs to acquireMachinesLock for "functional-232602"
	I1218 00:29:05.474294 1305484 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:29:05.474305 1305484 fix.go:54] fixHost starting: 
	I1218 00:29:05.474585 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:05.494473 1305484 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:29:05.494511 1305484 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:29:05.497625 1305484 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:29:05.497657 1305484 machine.go:94] provisionDockerMachine start ...
	I1218 00:29:05.497756 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.514682 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.515020 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.515044 1305484 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:29:05.668376 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.668400 1305484 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:29:05.668465 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.700140 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.700482 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.700495 1305484 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:29:05.865944 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.866034 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.884487 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.884983 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.885010 1305484 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:29:06.041516 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:29:06.041541 1305484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:29:06.041561 1305484 ubuntu.go:190] setting up certificates
	I1218 00:29:06.041572 1305484 provision.go:84] configureAuth start
	I1218 00:29:06.041652 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.060898 1305484 provision.go:143] copyHostCerts
	I1218 00:29:06.060951 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.060994 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:29:06.061002 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.061080 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:29:06.061163 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061182 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:29:06.061187 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061215 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:29:06.061256 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061273 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:29:06.061277 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061301 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:29:06.061349 1305484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:29:06.177802 1305484 provision.go:177] copyRemoteCerts
	I1218 00:29:06.177898 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:29:06.177967 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.195440 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.308765 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 00:29:06.308835 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:29:06.326972 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 00:29:06.327095 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:29:06.345137 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 00:29:06.345225 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:29:06.363588 1305484 provision.go:87] duration metric: took 321.991809ms to configureAuth
	I1218 00:29:06.363617 1305484 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:29:06.363812 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:06.363826 1305484 machine.go:97] duration metric: took 866.163062ms to provisionDockerMachine
	I1218 00:29:06.363833 1305484 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:29:06.363845 1305484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:29:06.363904 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:29:06.363949 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.381445 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.493044 1305484 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:29:06.496574 1305484 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1218 00:29:06.496595 1305484 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1218 00:29:06.496599 1305484 command_runner.go:130] > VERSION_ID="12"
	I1218 00:29:06.496604 1305484 command_runner.go:130] > VERSION="12 (bookworm)"
	I1218 00:29:06.496612 1305484 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1218 00:29:06.496615 1305484 command_runner.go:130] > ID=debian
	I1218 00:29:06.496641 1305484 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1218 00:29:06.496649 1305484 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1218 00:29:06.496655 1305484 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1218 00:29:06.496744 1305484 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:29:06.496762 1305484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:29:06.496773 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:29:06.496837 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:29:06.496920 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:29:06.496932 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /etc/ssl/certs/12611482.pem
	I1218 00:29:06.497013 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:29:06.497022 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> /etc/test/nested/copy/1261148/hosts
	I1218 00:29:06.497083 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:29:06.504772 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:06.523736 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:29:06.542759 1305484 start.go:296] duration metric: took 178.908993ms for postStartSetup
	I1218 00:29:06.542856 1305484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:29:06.542901 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.560753 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.665778 1305484 command_runner.go:130] > 18%
	I1218 00:29:06.665854 1305484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:29:06.671095 1305484 command_runner.go:130] > 160G
	I1218 00:29:06.671651 1305484 fix.go:56] duration metric: took 1.19734099s for fixHost
	I1218 00:29:06.671671 1305484 start.go:83] releasing machines lock for "functional-232602", held for 1.197387766s
	I1218 00:29:06.671738 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.688941 1305484 ssh_runner.go:195] Run: cat /version.json
	I1218 00:29:06.689003 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.689377 1305484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:29:06.689435 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.710307 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.721003 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.812429 1305484 command_runner.go:130] > {"iso_version": "v1.37.0-1765846775-22141", "kicbase_version": "v0.0.48-1765966054-22186", "minikube_version": "v1.37.0", "commit": "c344550999bcbb78f38b2df057224788bb2d30b2"}
	I1218 00:29:06.812585 1305484 ssh_runner.go:195] Run: systemctl --version
	I1218 00:29:06.910410 1305484 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 00:29:06.913301 1305484 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1218 00:29:06.913347 1305484 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1218 00:29:06.913421 1305484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 00:29:06.917811 1305484 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 00:29:06.917849 1305484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:29:06.917931 1305484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:29:06.925837 1305484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:29:06.925861 1305484 start.go:496] detecting cgroup driver to use...
	I1218 00:29:06.925891 1305484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:29:06.925936 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:29:06.941416 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:29:06.954870 1305484 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:29:06.954953 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:29:06.971407 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:29:06.985680 1305484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:29:07.097075 1305484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:29:07.240817 1305484 docker.go:234] disabling docker service ...
	I1218 00:29:07.240965 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:29:07.256804 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:29:07.271026 1305484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:29:07.407005 1305484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:29:07.534286 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:29:07.548592 1305484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:29:07.562819 1305484 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 00:29:07.564071 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:29:07.574541 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:29:07.583515 1305484 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:29:07.583615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:29:07.592330 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.601414 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:29:07.610399 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.619445 1305484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:29:07.627615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:29:07.637099 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:29:07.646771 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:29:07.656000 1305484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:29:07.663026 1305484 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 00:29:07.664029 1305484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:29:07.671707 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:07.789368 1305484 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 00:29:07.948156 1305484 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:29:07.948230 1305484 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:29:07.952108 1305484 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1218 00:29:07.952130 1305484 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 00:29:07.952136 1305484 command_runner.go:130] > Device: 0,72	Inode: 1611        Links: 1
	I1218 00:29:07.952144 1305484 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:07.952150 1305484 command_runner.go:130] > Access: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952154 1305484 command_runner.go:130] > Modify: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952160 1305484 command_runner.go:130] > Change: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952164 1305484 command_runner.go:130] >  Birth: -
	I1218 00:29:07.952461 1305484 start.go:564] Will wait 60s for crictl version
	I1218 00:29:07.952520 1305484 ssh_runner.go:195] Run: which crictl
	I1218 00:29:07.958389 1305484 command_runner.go:130] > /usr/local/bin/crictl
	I1218 00:29:07.959041 1305484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:29:07.980682 1305484 command_runner.go:130] > Version:  0.1.0
	I1218 00:29:07.980702 1305484 command_runner.go:130] > RuntimeName:  containerd
	I1218 00:29:07.980709 1305484 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1218 00:29:07.980714 1305484 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 00:29:07.982988 1305484 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:29:07.983059 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.002890 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.002977 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.027238 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.034949 1305484 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:29:08.037919 1305484 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:29:08.055210 1305484 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:29:08.059294 1305484 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1218 00:29:08.059421 1305484 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:29:08.059535 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:08.059617 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.084496 1305484 command_runner.go:130] > {
	I1218 00:29:08.084519 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.084525 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084534 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.084540 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084546 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.084550 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084554 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084566 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.084574 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084578 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.084582 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084589 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084593 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084596 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084609 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.084616 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084642 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.084646 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084651 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084659 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.084666 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084671 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.084678 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084682 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084686 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084689 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084696 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.084705 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084716 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.084722 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084731 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084739 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.084751 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084756 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.084760 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.084764 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084768 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084777 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084786 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.084791 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084802 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.084805 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084810 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084818 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.084824 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084829 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.084835 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084839 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084851 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084855 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084860 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084863 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084868 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084876 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.084883 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084888 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.084892 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084896 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084905 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.084917 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084922 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.084929 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084943 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084946 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084957 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084961 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084965 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084968 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084975 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.084983 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084991 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.084998 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085003 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085019 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.085026 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085033 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.085037 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085041 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085044 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085050 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085054 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085057 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085060 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085067 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.085073 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085078 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.085084 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085088 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085106 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.085110 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085114 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.085124 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085128 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085132 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085138 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085148 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.085153 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085160 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.085166 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085170 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085182 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.085191 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085195 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.085199 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085203 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085206 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085224 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085228 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085231 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085235 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085244 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.085252 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085258 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.085264 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085270 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085278 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.085287 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085291 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.085296 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085300 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.085306 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085313 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085317 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.085320 1305484 command_runner.go:130] >     }
	I1218 00:29:08.085323 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.085325 1305484 command_runner.go:130] > }
	I1218 00:29:08.087939 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.087964 1305484 containerd.go:534] Images already preloaded, skipping extraction
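
The decision at containerd.go:627/534 comes from comparing the crictl image list against the expected preload set. A sketch of decoding the JSON payload shown above (field names taken from the dump; the struct is illustrative, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// criImages mirrors the crictl JSON dumped above.
type criImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	// Listing repo tags is enough to check a preload set against.
	for _, img := range imgs.Images {
		fmt.Println(img.RepoTags)
	}
}
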
	I1218 00:29:08.088036 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.111236 1305484 command_runner.go:130] > {
	I1218 00:29:08.111264 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.111269 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111279 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.111286 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111295 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.111298 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111302 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111311 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.111318 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111322 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.111330 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111334 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111337 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111340 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111347 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.111352 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111358 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.111364 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111368 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111379 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.111391 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111396 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.111400 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111404 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111407 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111410 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111417 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.111421 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111426 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.111429 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111437 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111447 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.111454 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111462 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.111467 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.111475 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111478 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111483 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111491 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.111499 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111504 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.111507 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111511 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111519 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.111522 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111527 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.111533 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111537 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111543 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111547 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111559 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111562 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111565 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111573 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.111580 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111585 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.111588 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111592 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111600 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.111606 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111611 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.111617 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111626 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111632 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111635 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111639 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111646 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111652 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111659 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.111662 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111668 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.111671 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111676 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111690 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.111697 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111701 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.111707 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111711 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111716 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111720 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111739 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111742 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111746 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111755 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.111759 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111768 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.111771 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111775 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111785 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.111798 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111802 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.111805 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111809 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111813 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111816 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111825 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.111835 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111840 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.111843 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111855 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111866 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.111872 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111876 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.111880 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111884 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111889 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111893 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111899 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111903 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111913 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111921 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.111925 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111929 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.111933 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111937 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111947 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.111959 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111963 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.111967 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111971 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.111978 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111982 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111989 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.111992 1305484 command_runner.go:130] >     }
	I1218 00:29:08.112001 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.112004 1305484 command_runner.go:130] > }
	I1218 00:29:08.114369 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.114392 1305484 cache_images.go:86] Images are preloaded, skipping loading
	I1218 00:29:08.114401 1305484 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:29:08.114566 1305484 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
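
The rendered unit above is installed as a systemd drop-in (the 326-byte 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line clears the packaged command before the override, which is standard systemd drop-in behavior. A hedged sketch of that install step, with the unit text and paths copied from this log (requires root):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	unit := `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
`
	dropIn := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	if err := os.WriteFile(dropIn, []byte(unit), 0o644); err != nil {
		log.Fatal(err)
	}
	// Pick up the new drop-in, then (re)start the kubelet.
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}
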
	I1218 00:29:08.114639 1305484 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:29:08.137373 1305484 command_runner.go:130] > {
	I1218 00:29:08.137395 1305484 command_runner.go:130] >   "cniconfig": {
	I1218 00:29:08.137400 1305484 command_runner.go:130] >     "Networks": [
	I1218 00:29:08.137405 1305484 command_runner.go:130] >       {
	I1218 00:29:08.137411 1305484 command_runner.go:130] >         "Config": {
	I1218 00:29:08.137420 1305484 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1218 00:29:08.137425 1305484 command_runner.go:130] >           "Name": "cni-loopback",
	I1218 00:29:08.137430 1305484 command_runner.go:130] >           "Plugins": [
	I1218 00:29:08.137433 1305484 command_runner.go:130] >             {
	I1218 00:29:08.137438 1305484 command_runner.go:130] >               "Network": {
	I1218 00:29:08.137442 1305484 command_runner.go:130] >                 "ipam": {},
	I1218 00:29:08.137452 1305484 command_runner.go:130] >                 "type": "loopback"
	I1218 00:29:08.137456 1305484 command_runner.go:130] >               },
	I1218 00:29:08.137463 1305484 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1218 00:29:08.137467 1305484 command_runner.go:130] >             }
	I1218 00:29:08.137470 1305484 command_runner.go:130] >           ],
	I1218 00:29:08.137483 1305484 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1218 00:29:08.137489 1305484 command_runner.go:130] >         },
	I1218 00:29:08.137494 1305484 command_runner.go:130] >         "IFName": "lo"
	I1218 00:29:08.137498 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137503 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137508 1305484 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1218 00:29:08.137515 1305484 command_runner.go:130] >     "PluginDirs": [
	I1218 00:29:08.137519 1305484 command_runner.go:130] >       "/opt/cni/bin"
	I1218 00:29:08.137522 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137526 1305484 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1218 00:29:08.137529 1305484 command_runner.go:130] >     "Prefix": "eth"
	I1218 00:29:08.137533 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137536 1305484 command_runner.go:130] >   "config": {
	I1218 00:29:08.137540 1305484 command_runner.go:130] >     "cdiSpecDirs": [
	I1218 00:29:08.137544 1305484 command_runner.go:130] >       "/etc/cdi",
	I1218 00:29:08.137554 1305484 command_runner.go:130] >       "/var/run/cdi"
	I1218 00:29:08.137569 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137573 1305484 command_runner.go:130] >     "cni": {
	I1218 00:29:08.137576 1305484 command_runner.go:130] >       "binDir": "",
	I1218 00:29:08.137580 1305484 command_runner.go:130] >       "binDirs": [
	I1218 00:29:08.137584 1305484 command_runner.go:130] >         "/opt/cni/bin"
	I1218 00:29:08.137587 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.137591 1305484 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1218 00:29:08.137595 1305484 command_runner.go:130] >       "confTemplate": "",
	I1218 00:29:08.137598 1305484 command_runner.go:130] >       "ipPref": "",
	I1218 00:29:08.137602 1305484 command_runner.go:130] >       "maxConfNum": 1,
	I1218 00:29:08.137606 1305484 command_runner.go:130] >       "setupSerially": false,
	I1218 00:29:08.137610 1305484 command_runner.go:130] >       "useInternalLoopback": false
	I1218 00:29:08.137613 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137620 1305484 command_runner.go:130] >     "containerd": {
	I1218 00:29:08.137627 1305484 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1218 00:29:08.137632 1305484 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1218 00:29:08.137639 1305484 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1218 00:29:08.137645 1305484 command_runner.go:130] >       "runtimes": {
	I1218 00:29:08.137648 1305484 command_runner.go:130] >         "runc": {
	I1218 00:29:08.137654 1305484 command_runner.go:130] >           "ContainerAnnotations": null,
	I1218 00:29:08.137665 1305484 command_runner.go:130] >           "PodAnnotations": null,
	I1218 00:29:08.137670 1305484 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1218 00:29:08.137674 1305484 command_runner.go:130] >           "cgroupWritable": false,
	I1218 00:29:08.137679 1305484 command_runner.go:130] >           "cniConfDir": "",
	I1218 00:29:08.137685 1305484 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1218 00:29:08.137689 1305484 command_runner.go:130] >           "io_type": "",
	I1218 00:29:08.137695 1305484 command_runner.go:130] >           "options": {
	I1218 00:29:08.137699 1305484 command_runner.go:130] >             "BinaryName": "",
	I1218 00:29:08.137703 1305484 command_runner.go:130] >             "CriuImagePath": "",
	I1218 00:29:08.137707 1305484 command_runner.go:130] >             "CriuWorkPath": "",
	I1218 00:29:08.137710 1305484 command_runner.go:130] >             "IoGid": 0,
	I1218 00:29:08.137715 1305484 command_runner.go:130] >             "IoUid": 0,
	I1218 00:29:08.137726 1305484 command_runner.go:130] >             "NoNewKeyring": false,
	I1218 00:29:08.137734 1305484 command_runner.go:130] >             "Root": "",
	I1218 00:29:08.137738 1305484 command_runner.go:130] >             "ShimCgroup": "",
	I1218 00:29:08.137742 1305484 command_runner.go:130] >             "SystemdCgroup": false
	I1218 00:29:08.137746 1305484 command_runner.go:130] >           },
	I1218 00:29:08.137752 1305484 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1218 00:29:08.137761 1305484 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1218 00:29:08.137764 1305484 command_runner.go:130] >           "runtimePath": "",
	I1218 00:29:08.137770 1305484 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1218 00:29:08.137780 1305484 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1218 00:29:08.137784 1305484 command_runner.go:130] >           "snapshotter": ""
	I1218 00:29:08.137787 1305484 command_runner.go:130] >         }
	I1218 00:29:08.137790 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137794 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137804 1305484 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1218 00:29:08.137817 1305484 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1218 00:29:08.137822 1305484 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1218 00:29:08.137828 1305484 command_runner.go:130] >     "disableApparmor": false,
	I1218 00:29:08.137835 1305484 command_runner.go:130] >     "disableHugetlbController": true,
	I1218 00:29:08.137840 1305484 command_runner.go:130] >     "disableProcMount": false,
	I1218 00:29:08.137844 1305484 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1218 00:29:08.137853 1305484 command_runner.go:130] >     "enableCDI": true,
	I1218 00:29:08.137857 1305484 command_runner.go:130] >     "enableSelinux": false,
	I1218 00:29:08.137862 1305484 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1218 00:29:08.137866 1305484 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1218 00:29:08.137871 1305484 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1218 00:29:08.137878 1305484 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1218 00:29:08.137882 1305484 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1218 00:29:08.137887 1305484 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1218 00:29:08.137894 1305484 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1218 00:29:08.137901 1305484 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137906 1305484 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1218 00:29:08.137921 1305484 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137929 1305484 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1218 00:29:08.137940 1305484 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1218 00:29:08.137943 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137947 1305484 command_runner.go:130] >   "features": {
	I1218 00:29:08.137952 1305484 command_runner.go:130] >     "supplemental_groups_policy": true
	I1218 00:29:08.137955 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137962 1305484 command_runner.go:130] >   "golang": "go1.24.9",
	I1218 00:29:08.137972 1305484 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137984 1305484 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137998 1305484 command_runner.go:130] >   "runtimeHandlers": [
	I1218 00:29:08.138001 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138005 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138009 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138019 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138022 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138025 1305484 command_runner.go:130] >     },
	I1218 00:29:08.138028 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138043 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138048 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138053 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138056 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138060 1305484 command_runner.go:130] >       "name": "runc"
	I1218 00:29:08.138065 1305484 command_runner.go:130] >     }
	I1218 00:29:08.138069 1305484 command_runner.go:130] >   ],
	I1218 00:29:08.138074 1305484 command_runner.go:130] >   "status": {
	I1218 00:29:08.138078 1305484 command_runner.go:130] >     "conditions": [
	I1218 00:29:08.138089 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138093 1305484 command_runner.go:130] >         "message": "",
	I1218 00:29:08.138097 1305484 command_runner.go:130] >         "reason": "",
	I1218 00:29:08.138101 1305484 command_runner.go:130] >         "status": true,
	I1218 00:29:08.138112 1305484 command_runner.go:130] >         "type": "RuntimeReady"
	I1218 00:29:08.138115 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138118 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138128 1305484 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1218 00:29:08.138137 1305484 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1218 00:29:08.138140 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138147 1305484 command_runner.go:130] >         "type": "NetworkReady"
	I1218 00:29:08.138150 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138155 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138178 1305484 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1218 00:29:08.138187 1305484 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1218 00:29:08.138192 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138197 1305484 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1218 00:29:08.138203 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138206 1305484 command_runner.go:130] >     ]
	I1218 00:29:08.138209 1305484 command_runner.go:130] >   }
	I1218 00:29:08.138212 1305484 command_runner.go:130] > }
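
Note the status block at the end of the crictl info dump: RuntimeReady is true but NetworkReady is false with reason NetworkPluginNotReady, because no CNI config exists yet in /etc/cni/net.d; the next lines therefore pick kindnet for the docker driver + containerd runtime combination. A small sketch that surfaces those conditions, assuming the JSON shape shown above (the struct is hypothetical, not a containerd type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("crictl", "info").Output()
	if err != nil {
		log.Fatal(err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatal(err)
	}
	for _, c := range info.Status.Conditions {
		// e.g. NetworkReady=false NetworkPluginNotReady Network plugin returns error: ...
		fmt.Printf("%s=%v %s %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
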
	I1218 00:29:08.140863 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:08.140888 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:08.140910 1305484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:29:08.140937 1305484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:29:08.141052 1305484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 00:29:08.141124 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:29:08.148733 1305484 command_runner.go:130] > kubeadm
	I1218 00:29:08.148755 1305484 command_runner.go:130] > kubectl
	I1218 00:29:08.148759 1305484 command_runner.go:130] > kubelet
	I1218 00:29:08.149813 1305484 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:29:08.149929 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:29:08.157899 1305484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:29:08.171631 1305484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:29:08.185534 1305484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1218 00:29:08.199213 1305484 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:29:08.203261 1305484 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1218 00:29:08.203343 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:08.317482 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:08.643734 1305484 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:29:08.643804 1305484 certs.go:195] generating shared ca certs ...
	I1218 00:29:08.643833 1305484 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:08.644029 1305484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:29:08.644119 1305484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:29:08.644145 1305484 certs.go:257] generating profile certs ...
	I1218 00:29:08.644307 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:29:08.644441 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:29:08.644531 1305484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:29:08.644560 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 00:29:08.644603 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 00:29:08.644662 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 00:29:08.644693 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 00:29:08.644737 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 00:29:08.644768 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 00:29:08.644809 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 00:29:08.644841 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 00:29:08.644932 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:29:08.645003 1305484 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:29:08.645041 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:29:08.645094 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:29:08.645151 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:29:08.645217 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:29:08.645309 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:08.645380 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.645420 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.645463 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem -> /usr/share/ca-certificates/1261148.pem
	I1218 00:29:08.646318 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:29:08.666060 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:29:08.685232 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:29:08.704134 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:29:08.723554 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:29:08.741698 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:29:08.759300 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:29:08.777293 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:29:08.794355 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:29:08.812054 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:29:08.830087 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:29:08.847372 1305484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:29:08.860094 1305484 ssh_runner.go:195] Run: openssl version
	I1218 00:29:08.866090 1305484 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1218 00:29:08.866507 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.874034 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:29:08.881757 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885459 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885707 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885773 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.926478 1305484 command_runner.go:130] > 3ec20f2e
	I1218 00:29:08.926977 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:29:08.934462 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.941654 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:29:08.949245 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953111 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953171 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953238 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.993847 1305484 command_runner.go:130] > b5213941
	I1218 00:29:08.994434 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:29:09.002229 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.011682 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:29:09.020345 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025298 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025353 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025405 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.072271 1305484 command_runner.go:130] > 51391683
	I1218 00:29:09.072867 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
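
Each CA above is hashed with openssl x509 -hash and exposed as /etc/ssl/certs/<hash>.0, the standard OpenSSL c_rehash symlink convention (e.g. b5213941.0 for minikubeCA.pem). An illustrative Go version of the verification step, shelling out to openssl the same way this log does:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	// c_rehash convention: the first cert with this subject hash is <hash>.0.
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err != nil {
		log.Fatalf("symlink missing: %v", err)
	}
	fmt.Println("trusted via", link)
}
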
	I1218 00:29:09.081208 1305484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085518 1305484 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085547 1305484 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1218 00:29:09.085554 1305484 command_runner.go:130] > Device: 259,1	Inode: 2346127     Links: 1
	I1218 00:29:09.085561 1305484 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:09.085576 1305484 command_runner.go:130] > Access: 2025-12-18 00:25:01.733890088 +0000
	I1218 00:29:09.085582 1305484 command_runner.go:130] > Modify: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085594 1305484 command_runner.go:130] > Change: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085606 1305484 command_runner.go:130] >  Birth: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085761 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:29:09.130673 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.131215 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:29:09.179276 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.179949 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:29:09.226958 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.227517 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:29:09.269182 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.269731 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:29:09.310659 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.311193 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 00:29:09.352162 1305484 command_runner.go:130] > Certificate will not expire
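
The six openssl x509 -checkend 86400 runs above assert that each control-plane certificate remains valid for at least the next 24 hours. The same check can be done natively with crypto/x509; a sketch, with the path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: fail if the cert expires within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
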
	I1218 00:29:09.352228 1305484 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:09.352303 1305484 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:29:09.352361 1305484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:29:09.379004 1305484 cri.go:89] found id: ""
	I1218 00:29:09.379101 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:29:09.386224 1305484 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1218 00:29:09.386247 1305484 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1218 00:29:09.386254 1305484 command_runner.go:130] > /var/lib/minikube/etcd:
	I1218 00:29:09.387165 1305484 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:29:09.387182 1305484 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:29:09.387261 1305484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:29:09.396523 1305484 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:29:09.396996 1305484 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.397115 1305484 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232602" cluster setting kubeconfig missing "functional-232602" context setting]
	I1218 00:29:09.397401 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
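The kubeconfig lines above describe the repair: the profile's cluster and context entries are missing from the file, so they are recreated and the file is rewritten under the write lock acquired on the last line. A sketch of the same repair using client-go's clientcmd package (profile name and server URL are taken from the log; locking and error details are elided):

    package kubecfgrepair

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig re-adds a missing cluster/context pair and rewrites
    // the file, roughly what the "needs updating (will repair)" step does.
    func repairKubeconfig(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &api.Cluster{Server: server}
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    	}
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }

Here it would be invoked as repairKubeconfig(kubeconfigPath, "functional-232602", "https://192.168.49.2:8441").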
	I1218 00:29:09.397832 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.398029 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.398566 1305484 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1218 00:29:09.398586 1305484 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1218 00:29:09.398591 1305484 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1218 00:29:09.398599 1305484 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1218 00:29:09.398604 1305484 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1218 00:29:09.398644 1305484 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
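The five "Feature gate default state" lines are client-go's environment-variable-driven client feature gates being read at client construction time. If the convention is remembered correctly, each gate can be flipped per process with a KUBE_FEATURE_<Name> variable set before the first client is built; treat the exact variable name as an assumption:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Assumed client-go convention: KUBE_FEATURE_<Name>, read when the
    	// first client is constructed, so it must be set before that point.
    	os.Setenv("KUBE_FEATURE_WatchListClient", "true")
    	fmt.Println(os.Getenv("KUBE_FEATURE_WatchListClient"))
    }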
	I1218 00:29:09.398857 1305484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:29:09.408050 1305484 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1218 00:29:09.408132 1305484 kubeadm.go:602] duration metric: took 20.943322ms to restartPrimaryControlPlane
	I1218 00:29:09.408155 1305484 kubeadm.go:403] duration metric: took 55.931707ms to StartCluster
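The two "duration metric" lines are the standard time.Since pattern wrapped around each phase; a minimal sketch of the shape (the sleep is a stand-in for the timed step):

    package main

    import (
    	"log"
    	"time"
    )

    func main() {
    	start := time.Now()
    	time.Sleep(20 * time.Millisecond) // stand-in for restartPrimaryControlPlane
    	log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
    }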
	I1218 00:29:09.408213 1305484 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.408302 1305484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.409063 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.409379 1305484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 00:29:09.409544 1305484 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
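The toEnable map above is what produces the "Setting addon ... = true" lines that follow: only entries mapped to true (default-storageclass and storage-provisioner in this run) are acted on. Conceptually the selection is just a filter over the map; a sketch, not minikube's actual code:

    package main

    import "fmt"

    func main() {
    	toEnable := map[string]bool{
    		"default-storageclass": true,
    		"storage-provisioner":  true,
    		"dashboard":            false,
    		// ... the remaining addons from the map above, all false ...
    	}
    	for name, enabled := range toEnable {
    		if enabled {
    			fmt.Printf("Setting addon %s=true in profile %q\n", name, "functional-232602")
    		}
    	}
    }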
	I1218 00:29:09.409943 1305484 addons.go:70] Setting storage-provisioner=true in profile "functional-232602"
	I1218 00:29:09.409964 1305484 addons.go:239] Setting addon storage-provisioner=true in "functional-232602"
	I1218 00:29:09.409988 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.409637 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:09.410125 1305484 addons.go:70] Setting default-storageclass=true in profile "functional-232602"
	I1218 00:29:09.410148 1305484 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232602"
	I1218 00:29:09.410443 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.410469 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.418864 1305484 out.go:179] * Verifying Kubernetes components...
	I1218 00:29:09.421814 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:09.464044 1305484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 00:29:09.465759 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.465914 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.466265 1305484 addons.go:239] Setting addon default-storageclass=true in "functional-232602"
	I1218 00:29:09.466296 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.466740 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.466941 1305484 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.466952 1305484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 00:29:09.466995 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.523535 1305484 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:09.523562 1305484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 00:29:09.523638 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
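"scp memory --> path" means the manifest never exists as a local file: the bytes are streamed over the SSH session and written on the node with root privileges. A stand-in sketch that shells out to the ssh CLI and sudo tee instead of minikube's own SSH client (host, port, and key path mirror the sshutil lines below):

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    // copyFromMemory streams in-memory bytes to a root-owned file on the node.
    func copyFromMemory(host, port, key string, data []byte, dest string) error {
    	cmd := exec.Command("ssh", "-p", port, "-i", key, host, "sudo", "tee", dest)
    	cmd.Stdin = bytes.NewReader(data)
    	return cmd.Run() // tee's echo of the file on stdout is discarded
    }

    func main() {
    	manifest := []byte("# storage-provisioner.yaml contents (2676 bytes in the run above)")
    	err := copyFromMemory("docker@127.0.0.1", "33902", "/path/to/id_rsa",
    		manifest, "/etc/kubernetes/addons/storage-provisioner.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    }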
	I1218 00:29:09.539603 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.550039 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.631300 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:09.666484 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.687810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
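Both applies run the kubectl binary minikube staged for the target Kubernetes version, pointed at the node-local kubeconfig, rather than any kubectl on the host. A sketch of how such an invocation is assembled (sudo accepts the leading KUBECONFIG=... argument as an environment assignment):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	ver := "v1.35.0-rc.1" // version staged by this run
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/"+ver+"/kubectl",
    		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Println(string(out), err)
    }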
	I1218 00:29:10.394630 1305484 node_ready.go:35] waiting up to 6m0s for node "functional-232602" to be "Ready" ...
	I1218 00:29:10.394645 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.394905 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.394947 1305484 retry.go:31] will retry after 177.31527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
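Everything from here to the end of the section is the same failure on a loop: the apiserver is not yet listening on 8441, kubectl's OpenAPI download gets connection refused, and retry.go schedules another attempt with a jittered, growing delay (177ms and 150ms here, climbing to 9.03s by 00:29:20). A generic sketch of that retry shape, assuming nothing about retry.go beyond what the log shows:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry keeps calling fn with a jittered, doubling delay between attempts,
    // logging a "will retry after" line on each failure.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := time.Duration(float64(base) * (0.5 + rand.Float64())) // 50-150% jitter
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(6, 150*time.Millisecond, func() error {
    		if calls++; calls < 4 {
    			return errors.New("connect: connection refused")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }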
	I1218 00:29:10.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.395055 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.395073 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395086 1305484 retry.go:31] will retry after 150.104012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395151 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.395566 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
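The round_trippers lines are client-go's verbose HTTP trace: each poll of the node_ready wait is a GET of /api/v1/nodes/functional-232602, and the empty status="" response here is the connection being refused before any HTTP exchange takes place. The check itself boils down to reading the node's Ready condition; a sketch with client-go, with clientset construction elided:

    package nodewait

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isNodeReady issues the same GET as the trace above and inspects the
    // node's Ready condition.
    func isNodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }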
	I1218 00:29:10.545905 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:10.572498 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.615825 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.615864 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.615882 1305484 retry.go:31] will retry after 386.236336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650773 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.650838 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650865 1305484 retry.go:31] will retry after 280.734601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.894991 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.895069 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.932808 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.998277 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.998407 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.998429 1305484 retry.go:31] will retry after 660.849815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.003467 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.066495 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.066548 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.066567 1305484 retry.go:31] will retry after 792.514458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.395083 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.659960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:11.722453 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.722493 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.722511 1305484 retry.go:31] will retry after 472.801155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.859919 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.895517 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.895589 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.895884 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.931975 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.936172 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.936234 1305484 retry.go:31] will retry after 583.966469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.195539 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:12.255280 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.259094 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.259131 1305484 retry.go:31] will retry after 926.212833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.395399 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.395475 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.395812 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:12.395919 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
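Note the cadence: the GETs land at roughly .395 and .895 of each second, i.e. a 500ms poll interval, and connection refused is treated as transient (logged "will retry") rather than aborting the 6m wait. Reusing isNodeReady from the sketch above, the loop is roughly the following, with apimachinery's wait helper (k8s.io/apimachinery/pkg/util/wait) standing in for minikube's own loop:

    // 500ms interval and 6m timeout match the cadence and the "waiting up to
    // 6m0s" line in the log; dial errors are swallowed so polling continues.
    err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    	func(ctx context.Context) (bool, error) {
    		ready, err := isNodeReady(ctx, cs, "functional-232602")
    		if err != nil {
    			log.Printf("error getting node (will retry): %v", err)
    			return false, nil // transient; keep polling
    		}
    		return ready, nil
    	})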
	I1218 00:29:12.520996 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:12.581638 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.581728 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.581762 1305484 retry.go:31] will retry after 1.65494693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.894979 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.895073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.895402 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.186032 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:13.243730 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:13.248249 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.248281 1305484 retry.go:31] will retry after 1.192911742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.395563 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.395681 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.395976 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.895848 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.895954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.896330 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:14.237854 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:14.298889 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.302600 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.302641 1305484 retry.go:31] will retry after 1.5263786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.395779 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.395871 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.396209 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:14.396293 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:14.441356 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:14.508115 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.508165 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.508184 1305484 retry.go:31] will retry after 3.305911776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.895774 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.895890 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.896219 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.394975 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.395063 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.395415 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.829900 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:15.892510 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:15.892556 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.892574 1305484 retry.go:31] will retry after 3.944012673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.895725 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.895798 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.896127 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.394873 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.394951 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.395246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.894968 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.895399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:16.895481 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:17.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:17.814960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:17.873346 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:17.873415 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.873437 1305484 retry.go:31] will retry after 2.287204088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.895511 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.895579 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.895833 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.395764 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.395845 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.396148 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.895031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.895440 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:19.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.395284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.395328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:19.836815 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:19.891772 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895038 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.895109 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.895347 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.895501 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895520 1305484 retry.go:31] will retry after 2.272181462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.160871 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:20.233754 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:20.233805 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.233824 1305484 retry.go:31] will retry after 9.03130365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.395263 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.395392 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.395710 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:20.894916 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.894992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.895300 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:21.395041 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.395135 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.395466 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:21.395525 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:21.894931 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.895012 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.895341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.168810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:22.226105 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:22.229620 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.229649 1305484 retry.go:31] will retry after 6.326012676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.394929 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.395003 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.395290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.894864 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.894939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.895280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.395383 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.895016 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.895087 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.895360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:23.895414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:24.395042 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.395119 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:24.895109 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.895188 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.395358 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.395437 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.395700 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.895538 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.895612 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.895906 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:25.895954 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:26.395465 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.395571 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.395892 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:26.895653 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.895735 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.395741 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.395852 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.396210 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.895856 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.895939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.896273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:27.896328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:28.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.394974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.395220 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:28.556610 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:28.617128 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:28.617182 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.617202 1305484 retry.go:31] will retry after 6.797257953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.895668 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.895975 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.265354 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:29.327180 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:29.327227 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.327246 1305484 retry.go:31] will retry after 10.081474738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.395407 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.395481 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.395821 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.895626 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.895701 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:30.395476 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:30.395558 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:30.395870 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:30.395928 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:30.895674 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:30.895771 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:30.896102 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:31.395677 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:31.395765 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:31.396042 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:31.895800 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:31.895892 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:31.896225 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:32.395871 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:32.395946 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:32.396238 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:32.396286 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:32.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:32.894971 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:32.895221 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:33.394922 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:33.395006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:33.395329 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:33.894947 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:33.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:33.895346 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:34.395004 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:34.395075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:34.395382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:34.894995 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:34.895096 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:34.895485 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:34.895540 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:35.395275 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:35.395369 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:35.395683 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:35.415065 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:35.470618 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:35.474707 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:35.474739 1305484 retry.go:31] will retry after 12.346765183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:35.894884 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:35.894968 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:35.895217 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:36.394940 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:36.395023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:36.395311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:36.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:36.895031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:36.895297 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:37.395715 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:37.395786 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:37.396036 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:37.396085 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:37.895882 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:37.895957 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:37.896282 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:38.394978 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:38.395072 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:38.395404 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:38.894973 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:38.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:38.895353 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:39.394982 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:39.395085 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:39.395413 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:39.409781 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:39.473091 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:39.473144 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:39.473164 1305484 retry.go:31] will retry after 18.475103934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:39.895746 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:39.895826 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:39.896182 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:39.896239 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:40.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:40.394986 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:40.395287 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:40.894982 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:40.895057 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:40.895387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:41.395082 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:41.395197 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:41.395487 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:41.894877 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:41.894953 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:41.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:42.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:42.395028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:42.395341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:42.395398 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:42.895053 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:42.895127 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:42.895451 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:43.394921 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:43.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:43.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:43.894994 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:43.895073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:43.895439 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:44.395145 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:44.395224 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:44.395532 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:44.395581 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:44.895223 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:44.895291 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:44.895552 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:45.395338 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:45.395498 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:45.396157 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:45.894915 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:45.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:45.895368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:46.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:46.394994 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:46.395277 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:46.894960 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:46.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:46.895364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:46.895417 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:47.395091 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:47.395170 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:47.395536 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:47.821776 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:47.880326 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:47.883900 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:47.883932 1305484 retry.go:31] will retry after 18.240859758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:47.895204 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:47.895277 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:47.895522 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:48.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:48.395039 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:48.395369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:48.895103 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:48.895186 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:48.895530 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:48.895589 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:49.395004 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:49.395084 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:49.395389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:49.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:49.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:49.895387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:50.395307 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:50.395385 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:50.395702 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:50.895512 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:50.895597 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:50.895908 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:50.895965 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:51.395762 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:51.395833 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:51.396181 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:51.894896 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:51.894981 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:51.895266 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:52.394927 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:52.395005 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:52.395321 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:52.894986 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:52.895064 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:52.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:53.395068 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:53.395156 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:53.395497 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:53.395555 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:53.894871 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:53.894939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:53.895228 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:54.395496 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:54.395573 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:54.395859 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:54.895684 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:54.895759 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:54.896113 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:55.394869 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:55.394953 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:55.395245 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:55.894992 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:55.895075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:55.895404 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:55.895459 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:56.394949 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:56.395026 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:56.395302 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:56.894957 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:56.895052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:56.895354 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:57.395063 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:57.395338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.895034 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:57.895127 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:57.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:57.948848 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:58.011608 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:58.015264 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:58.015303 1305484 retry.go:31] will retry after 17.396243449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:58.394837 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:58.394927 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:58.395242 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:58.395294 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:58.894996 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:58.895074 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:58.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:59.395011 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:59.395091 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:59.395444 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:59.894993 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:59.895064 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:59.895344 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:00.395507 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:00.395593 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:00.395898 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:00.395950 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:00.894850 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:00.894938 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:00.895369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:01.394969 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:01.395050 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:01.395325 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:01.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:01.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:01.895392 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:02.395062 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:02.395142 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:02.395460 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:02.894920 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:02.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:02.895347 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:02.895401 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:03.394977 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:03.395051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:03.395392 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:03.894963 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:03.895041 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:03.895359 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:04.394902 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:04.394991 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:04.395271 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:04.894869 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:04.894956 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:04.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:05.395299 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:05.395380 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:05.395678 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:05.395727 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:30:05.894935 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:05.895016 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:05.895336 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:06.125881 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:06.190863 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:06.190916 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:06.190936 1305484 retry.go:31] will retry after 24.931144034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:06.395236 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:06.395314 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:06.395677 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:06.895467 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:06.895550 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:06.895878 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:07.395628 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:07.395697 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:07.395955 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:07.395997 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 polled every ~500 ms through 00:30:15.395, every response empty; node_ready.go:55 "connection refused" warnings repeated at 00:30:09.895, 00:30:12.395 and 00:30:14.895 ...]
	I1218 00:30:15.411948 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:15.467885 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:15.471996 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:15.472026 1305484 retry.go:31] will retry after 23.671964263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
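Same failure mode as the storageclass apply above, now for storage-provisioner; retry.go schedules a jittered delay each time (~25 s for storageclass, ~24 s here). A rough shell equivalent of the apply-and-retry loop, with the interval hard-coded purely for illustration (the harness actually invokes the versioned binary under /var/lib/minikube/binaries and computes the backoff itself):

    # Keep re-applying until the apiserver accepts the manifest (sketch only):
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml; do
      sleep 24   # stand-in for retry.go's jittered wait
    done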
	[... identical ~500 ms polls from 00:30:15.895 through 00:30:30.895, all refused; node_ready.go:55 warnings repeated roughly every 2 s (00:30:16.895 through 00:30:30.395) ...]
	I1218 00:30:31.123262 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:31.181409 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.184938 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.185056 1305484 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
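At this point the addon manager gives up on default-storageclass for this start. If the apiserver later becomes reachable, the addon can be re-applied by hand with minikube's own addon command (profile name taken from the log):

    minikube -p functional-232602 addons enable default-storageclass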
	[... identical ~500 ms polls from 00:30:31.395 through 00:30:38.895, all refused; node_ready.go:55 warnings at 00:30:32.396, 00:30:34.895 and 00:30:36.895 ...]
	I1218 00:30:39.144879 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:39.206506 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206561 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206652 1305484 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 00:30:39.209780 1305484 out.go:179] * Enabled addons: 
	I1218 00:30:39.213292 1305484 addons.go:530] duration metric: took 1m29.803748848s for enable addons: enabled=[]
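The summary above confirms the outcome: the enable loop ran for ~1m30s and finished with an empty addon set (enabled=[]). Afterwards the addon state can be inspected with the following (a sketch; given the failures above, both default-storageclass and storage-provisioner would be expected to show as disabled):

    minikube -p functional-232602 addons list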
	[... identical ~500 ms polls from 00:30:39.394 through 00:31:01.895, all refused; node_ready.go:55 warnings repeated roughly every 2 s throughout ...]
	I1218 00:31:02.395506 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:02.395587 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:02.395926 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:02.895711 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:02.895787 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:02.896085 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:03.394835 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:03.394918 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:03.395241 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:03.894953 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:03.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:03.895351 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:03.895409 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:04.394887 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:04.394952 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:04.395203 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:04.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:04.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:04.895585 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:05.395452 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:05.395534 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:05.395859 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:05.895595 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:05.895675 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:05.895945 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:05.895986 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:06.395824 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:06.395899 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:06.396242 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:06.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:06.895023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:06.895350 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:07.395035 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:07.395109 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:07.395364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:07.894882 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:07.894960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:07.895283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:08.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:08.395097 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:08.395422 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:08.395475 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:08.895113 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:08.895185 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:08.895437 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:09.394963 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:09.395061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:09.395425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:09.894913 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:09.894995 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:09.895574 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:10.395178 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:10.395258 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:10.395523 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:10.395562 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:10.895006 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:10.895092 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:10.895441 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:11.395247 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:11.395326 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:11.395703 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:11.895773 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:11.895839 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:11.896085 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:12.395833 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:12.395908 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:12.396246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:12.396315 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:12.894849 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:12.894941 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:12.895310 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:13.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.395339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:13.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.895004 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.895326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.394884 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.395283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.894810 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.894876 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.895171 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:14.895233 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:15.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.395266 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.395614 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:15.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.895319 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.394906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.394976 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.395230 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:16.895449 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:17.395167 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.395260 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.395607 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:17.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.895160 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.895445 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.394976 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.395051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.395367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.895357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:19.395005 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.395075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.395332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:19.395376 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:19.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.395282 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.395364 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.395694 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.895475 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.895552 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.895809 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:21.395604 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.395678 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.395990 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:21.396041 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:21.895659 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.895733 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.896015 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.395655 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.395728 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.395992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.895435 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.895515 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.895848 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:23.395649 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.395732 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.396074 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:23.396134 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:23.895883 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.895960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.896252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.394837 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.394919 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.395263 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.894847 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.894932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.895271 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.395082 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.395154 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.395412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.895068 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.895145 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.895475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:25.895531 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:26.395075 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.395488 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:26.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.895250 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.395036 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.395377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.895371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:28.395072 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.395417 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:28.395459 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:28.894956 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.895034 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:29.395100 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:29.395178 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:29.395520 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:29.894938 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:29.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:29.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:30.395237 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:30.395365 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:30.395704 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:30.395760 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:30.895519 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:30.895599 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:30.895940 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:31.395676 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:31.395750 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:31.396048 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:31.895809 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:31.895895 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:31.896244 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:32.394845 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:32.394971 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:32.395293 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:32.894900 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:32.894975 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:32.895268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:32.895326 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:33.394994 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:33.395070 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:33.395437 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:33.895135 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:33.895218 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:33.895535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:34.395882 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:34.395954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:34.396208 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:34.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:34.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:34.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:34.895368 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:35.395101 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:35.395178 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:35.395506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:35.895173 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:35.895249 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:35.895577 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:36.394916 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:36.394992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:36.395327 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:36.894927 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:36.895007 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:36.895323 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:37.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:37.394967 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:37.395252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:37.395302 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:37.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:37.895009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:37.895332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:38.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:38.395038 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:38.395371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:38.895059 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:38.895134 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:38.895394 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:39.394962 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:39.395049 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:39.395388 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:39.395443 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:39.895187 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:39.895277 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:39.895635 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:40.395270 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:40.395343 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:40.395589 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:40.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:40.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:40.895352 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:41.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:41.395047 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:41.395386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:41.895073 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:41.895149 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:41.895412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:41.895454 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:42.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:42.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:42.395389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:42.895106 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:42.895183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:42.895531 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:43.394891 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:43.394962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:43.395291 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:43.894960 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:43.895052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:43.895424 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:43.895479 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:44.394928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:44.395009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:44.395368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:44.895047 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:44.895117 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:44.895407 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:45.395328 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:45.395422 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:45.395783 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:45.895608 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:45.895699 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:45.896131 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:45.896187 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:46.394880 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:46.394956 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:46.395280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:46.894977 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:46.895051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:46.895386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:47.395116 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:47.395191 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:47.395557 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:47.894966 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:47.895047 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:47.895364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:48.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:48.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:48.395365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:48.395424 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:48.895132 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:48.895210 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:48.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:49.394910 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:49.395006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:49.395291 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:49.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:49.895027 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:49.895327 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:50.395224 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:50.395303 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:50.395651 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:50.395707 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:50.895406 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:50.895483 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:50.895757 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:51.395554 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:51.395639 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:51.395931 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:51.895695 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:51.895768 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:51.896100 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:52.395729 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:52.395811 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:52.396079 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:52.396127 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 request/response cycle shown above repeats every ~500 ms from 00:31:52 through 00:32:52, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs the "will retry" warning every few attempts ...]
	W1218 00:32:52.895410 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:53.395078 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:53.395162 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:53.395500 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:53.895715 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:53.895783 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:53.896041 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:54.395464 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:54.395544 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:54.395863 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:54.895501 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:54.895586 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:54.895913 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:54.895971 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:55.395850 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:55.395924 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:55.396188 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:55.894889 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:55.894972 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:55.895296 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:56.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:56.395115 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:56.395513 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:56.895193 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:56.895259 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:56.895583 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:57.394937 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:57.395024 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:57.395358 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:57.395413 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:57.894934 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:57.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:57.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:58.395771 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:58.395843 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:58.396103 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:58.895868 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:58.895950 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:58.896279 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:59.394910 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:59.394988 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:59.395315 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:59.895060 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:59.895138 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:59.895426 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:59.895473 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:00.395531 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:00.395633 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:00.396109 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:00.894904 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:00.894983 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:00.895313 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:01.394991 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:01.395062 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:01.395320 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:01.894951 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:01.895027 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:01.895358 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:02.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:02.395021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:02.395373 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:02.395430 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:02.895092 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:02.895164 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:02.895428 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:03.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:03.395039 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:03.395411 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:03.895004 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:03.895093 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:03.895426 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:04.394889 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:04.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:04.395259 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:04.894914 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:04.894989 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:04.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:04.895395 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:05.395163 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:05.395243 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:05.395682 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:05.895450 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:05.895524 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:05.895784 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:06.395568 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:06.395656 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:06.395978 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:06.895794 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:06.895874 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:06.896211 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:06.896271 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:07.394879 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:07.394959 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:07.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:07.894962 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:07.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:07.895397 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:08.394973 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:08.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:08.395407 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:08.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:08.895172 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:08.895469 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:09.394967 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:09.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:09.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:09.395444 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:09.895137 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:09.895212 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:09.895526 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:10.395178 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:10.395259 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:10.395579 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:10.895391 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:10.895474 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:10.895867 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:11.395660 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:11.395744 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:11.396081 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:11.396140 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:11.895822 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:11.895896 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:11.896157 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:12.394896 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:12.394973 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:12.395293 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:12.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:12.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:12.895368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:13.395034 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:13.395107 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:13.395365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:13.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:13.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:13.895385 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:13.895454 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:14.395141 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:14.395215 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:14.395535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:14.895214 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:14.895295 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:14.895592 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:15.395316 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:15.395398 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:15.395758 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:15.895576 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:15.895652 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:15.895992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:15.896047 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:16.395754 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:16.395828 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:16.396096 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:16.895867 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:16.895943 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:16.896286 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:17.394997 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:17.395084 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:17.395428 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:17.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:17.894962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:17.895235 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:18.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:18.395037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:18.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:18.395414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:18.894958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:18.895040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:18.895375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:19.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:19.394980 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:19.395272 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:19.895004 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:19.895087 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:19.895438 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:20.395201 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:20.395308 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:20.395646 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:20.395698 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:20.895422 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:20.895490 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:20.895757 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:21.395521 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:21.395598 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:21.395947 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:21.895610 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:21.895689 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:21.896027 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:22.395778 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:22.395849 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:22.396108 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:22.396151 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:22.894879 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:22.894954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:22.895254 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:23.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:23.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:23.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:23.894944 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:23.895018 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:23.895354 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:24.395023 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:24.395106 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:24.395432 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:24.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:24.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:24.895375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:24.895433 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:25.395157 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:25.395226 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:25.395475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:25.895136 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:25.895218 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:25.895539 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:26.395250 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:26.395334 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:26.395706 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:26.895464 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:26.895534 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:26.895793 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:26.895834 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:27.395582 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:27.395665 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:27.396005 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:27.895686 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:27.895765 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:27.896121 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:28.395755 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:28.395828 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:28.396080 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:28.895856 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:28.895931 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:28.896264 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:28.896319 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:29.394871 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:29.394967 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:29.395342 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:29.895043 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:29.895118 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:29.895400 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:30.395313 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:30.395390 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:30.395741 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:30.895528 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:30.895610 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:30.895946 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:31.395576 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:31.395644 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:31.395889 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:31.395930 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:31.895675 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:31.895753 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:31.896082 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:32.394834 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:32.394919 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:32.395261 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:32.894964 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:32.895046 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:32.895346 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:33.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:33.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:33.395396 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:33.895091 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:33.895177 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:33.895502 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:33.895563 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:34.394882 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:34.394955 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:34.395261 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:34.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:34.895025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:34.895369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:35.395078 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:35.395153 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:35.395506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:35.894873 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:35.894948 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:35.895257 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:36.394950 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:36.395033 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:36.395348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:36.395402 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:36.895071 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:36.895153 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:36.895476 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:37.394881 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:37.394952 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:37.395268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:37.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:37.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:37.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:38.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:38.395002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:38.395360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:38.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:38.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:38.895305 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:38.895353 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:39.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:39.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:39.395364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:39.895212 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:39.895299 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:39.895609 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:40.395293 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:40.395361 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:40.395613 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:40.894947 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:40.895022 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:40.895328 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:40.895383 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:41.395069 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:41.395147 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:41.395453 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:41.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:41.895013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:41.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:42.394951 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:42.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:42.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:42.895138 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:42.895215 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:42.895542 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:42.895601 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical 500ms poll cycle repeats from 00:33:43 through 00:34:43: each GET to https://192.168.49.2:8441/api/v1/nodes/functional-232602 carries the same Accept and User-Agent headers, every response comes back empty with milliseconds=0 because the connection is refused, and node_ready.go:55 emits the same "will retry" warning roughly every 2.5s (at 00:33:44, 00:33:47, 00:33:49, 00:33:51, 00:33:54, 00:33:56, 00:33:58, 00:34:00, 00:34:02, 00:34:04, 00:34:07, 00:34:09, 00:34:11, 00:34:13, 00:34:15, 00:34:18, 00:34:20, 00:34:22, 00:34:24, 00:34:27, 00:34:29, 00:34:31, 00:34:34, 00:34:36, 00:34:38, and 00:34:40); the final warning and poll cycle follow verbatim ...]
	W1218 00:34:43.395397 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:43.894931 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:43.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:43.895331 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:44.395078 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:44.395167 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:44.395534 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:44.894915 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:44.895001 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:44.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:45.395381 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:45.395465 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:45.395835 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:45.395899 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:45.895622 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:45.895696 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:45.896010 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:46.395697 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:46.395815 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:46.396068 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:46.895828 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:46.895903 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:46.896238 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:47.394829 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:47.394914 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:47.395208 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:47.894909 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:47.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:47.895256 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:47.895315 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:48.394935 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:48.395013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:48.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:48.895103 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:48.895191 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:48.895572 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:49.395252 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:49.395319 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:49.395570 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:49.895468 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:49.895542 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:49.895868 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:49.895924 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:50.395784 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:50.395860 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:50.396189 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:50.895823 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:50.895905 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:50.896170 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:51.394877 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:51.394954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:51.395305 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:51.894901 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:51.894977 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:51.895290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:52.394890 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:52.394961 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:52.395282 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:52.395333 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:52.895035 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:52.895119 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:52.895493 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:53.395218 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:53.395297 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:53.395619 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:53.894885 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:53.894963 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:53.895214 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:54.394879 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:54.394959 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:54.395306 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:54.395365 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:54.894934 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:54.895022 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:54.895382 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:55.395135 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:55.395210 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:55.395475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:55.894945 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:55.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:55.895381 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:56.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:56.395029 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:56.395367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:56.395422 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:56.895064 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:56.895133 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:56.895393 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:57.394932 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:57.395009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:57.395363 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:57.895056 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:57.895135 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:57.895491 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:58.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:58.395253 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:58.395564 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:58.395616 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:58.894935 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:58.895017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:58.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:59.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:59.395042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:59.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:59.894882 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:59.894955 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:59.895253 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:00.395263 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:00.395351 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:00.395651 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:00.395696 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:00.895585 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:00.895660 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:00.895999 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:01.395773 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:01.395844 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:01.396106 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:01.895887 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:01.895974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:01.896290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:02.394993 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:02.395076 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:02.395438 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:02.895141 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:02.895226 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:02.895545 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:02.895597 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:03.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:03.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:03.395370 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:03.895085 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:03.895169 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:03.895513 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:04.395827 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:04.395892 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:04.396191 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:04.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:04.894983 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:04.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:05.395161 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:05.395239 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:05.395535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:05.395581 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:05.894901 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:05.894974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:05.895226 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:06.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:06.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:06.395376 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:06.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:06.895030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:06.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:07.395052 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:07.395122 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:07.395495 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:07.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:07.895023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:07.895351 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:07.895403 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:08.395103 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:08.395179 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:08.395500 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:08.895048 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:08.895123 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:08.895471 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:09.395187 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:09.395267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:09.395657 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:09.895568 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:09.895676 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:09.896021 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:09.896082 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:10.395155 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:10.395216 1305484 node_ready.go:38] duration metric: took 6m0.000503053s for node "functional-232602" to be "Ready" ...
	I1218 00:35:10.402744 1305484 out.go:203] 
	W1218 00:35:10.405748 1305484 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 00:35:10.405971 1305484 out.go:285] * 
	W1218 00:35:10.408384 1305484 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:35:10.411337 1305484 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:35:17 functional-232602 containerd[5205]: time="2025-12-18T00:35:17.705232515Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:18 functional-232602 containerd[5205]: time="2025-12-18T00:35:18.764609613Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 18 00:35:18 functional-232602 containerd[5205]: time="2025-12-18T00:35:18.767396570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 18 00:35:18 functional-232602 containerd[5205]: time="2025-12-18T00:35:18.774291751Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:18 functional-232602 containerd[5205]: time="2025-12-18T00:35:18.774651350Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:19 functional-232602 containerd[5205]: time="2025-12-18T00:35:19.748841940Z" level=info msg="No images store for sha256:d3f166a94538771772f2aeda8faeb235ac972e7b336df4992d5412071ea6ea51"
	Dec 18 00:35:19 functional-232602 containerd[5205]: time="2025-12-18T00:35:19.751193535Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-232602\""
	Dec 18 00:35:19 functional-232602 containerd[5205]: time="2025-12-18T00:35:19.758320825Z" level=info msg="ImageCreate event name:\"sha256:6d75aca4bf4907371f012e5f07ac5372de8ce8437f37e9634c438c36e1e883ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:19 functional-232602 containerd[5205]: time="2025-12-18T00:35:19.758874716Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:20 functional-232602 containerd[5205]: time="2025-12-18T00:35:20.591512381Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 18 00:35:20 functional-232602 containerd[5205]: time="2025-12-18T00:35:20.594015209Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 18 00:35:20 functional-232602 containerd[5205]: time="2025-12-18T00:35:20.596048115Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 18 00:35:20 functional-232602 containerd[5205]: time="2025-12-18T00:35:20.608020639Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.562936969Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.565290492Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.568554895Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.575271348Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.716013099Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.718216414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.727189899Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.727523709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.890060584Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.892826454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.899882107Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.900672596Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:35:23.699341    9177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:23.699753    9177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:23.701398    9177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:23.702037    9177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:23.703686    9177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:35:23 up  7:17,  0 user,  load average: 0.32, 0.28, 0.65
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:35:20 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:20 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 18 00:35:20 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:20 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:20 functional-232602 kubelet[8943]: E1218 00:35:20.961910    8943 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:20 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:20 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:21 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 18 00:35:21 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:21 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:21 functional-232602 kubelet[9028]: E1218 00:35:21.695985    9028 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:21 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:21 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:22 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 18 00:35:22 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:22 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:22 functional-232602 kubelet[9075]: E1218 00:35:22.447693    9075 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:22 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:22 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 18 00:35:23 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:23 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:23 functional-232602 kubelet[9096]: E1218 00:35:23.213947    9096 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
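The dominant content of the log above is a single deadline-bounded retry loop: minikube polls GET /api/v1/nodes/functional-232602 every ~500ms until the 6m wait expires, and every attempt is refused because nothing is listening on 8441. As a reading aid, here is a minimal, self-contained Go sketch of a poll of the same shape. It is illustrative only and is not minikube's node_ready.go implementation; the URL, interval, and 6m deadline are taken from the log, while all names and client setup are assumptions.

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollNodeReady issues a GET against the apiserver's node endpoint on every
// tick until either a 200 response arrives or the context deadline expires.
// This mirrors the shape of the retry loop in the log above; it is a sketch,
// not minikube's actual code.
func pollNodeReady(ctx context.Context, url string, interval time.Duration) error {
	// The test cluster uses its own CA; skip verification in this sketch.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				// e.g. "connect: connection refused" while the apiserver is down.
				fmt.Printf("will retry: %v\n", err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	// 6m0s matches the wait timeout reported in the failure message above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := pollNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-232602", 500*time.Millisecond)
	fmt.Println(err)
}
```

When the deadline fires, ctx.Err() is context.DeadlineExceeded, which is the same condition surfaced in the failure message above ("WaitNodeCondition: context deadline exceeded").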
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (352.541252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (2.26s)
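The kubelet journal above points at the likely root cause for this whole group of failures: kubelet v1.35.0-rc.1 validates its configuration at startup and exits immediately on a cgroup v1 host ("cgroup v1 support is unsupported"), systemd restarts it (counters 823 through 826), the control plane never comes up, and every apiserver dial is refused. The kernel line (5.15.0-1084-aws, Ubuntu 20.04) is consistent with a cgroup v1 default. A quick host-side check of the cgroup mode is sketched below in Go; the file-presence heuristic (cgroup.controllers exists at the cgroup root only on the v2 unified hierarchy) is a common one, and the program itself is ours, not part of the test suite.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// On a cgroup v2 (unified hierarchy) host, /sys/fs/cgroup is a cgroup2
	// mount and exposes cgroup.controllers at its root. On a cgroup v1 host
	// that file is absent. This is a common heuristic for telling the two apart.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified): kubelet v1.35 can run here")
	} else if os.IsNotExist(err) {
		fmt.Println("cgroup v1: kubelet v1.35 will refuse to start, as seen in the journal above")
	} else {
		fmt.Println("could not determine cgroup version:", err)
	}
}
```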

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-232602 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-232602 get pods: exit status 1 (130.847467ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-232602 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
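One detail worth pulling out of the inspect output: the apiserver port 8441/tcp is published to 127.0.0.1:33905 on the host, while the tests dial 192.168.49.2:8441 on the container network directly; both paths are refused because the apiserver process itself is down, not because of a port-mapping problem. If you need to extract a published port programmatically, docker inspect's standard Go-template support can do it; below is a minimal wrapper sketch in Go, assuming a docker CLI on PATH (the helper name is ours).

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks the docker CLI for the host port a container port is
// published on, using docker inspect's template support. The template
// indexes NetworkSettings.Ports exactly as the JSON above is shaped.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("functional-232602", "8441/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver published at 127.0.0.1:" + p) // prints 33905 for the run above
}
```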
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (315.06068ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-739047 image ls --format short --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls --format yaml --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh     │ functional-739047 ssh pgrep buildkitd                                                                                                                 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ image   │ functional-739047 image ls --format json --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls --format table --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr                                                │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls                                                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ delete  │ -p functional-739047                                                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ start   │ -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ start   │ -p functional-232602 --alsologtostderr -v=8                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:29 UTC │                     │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:latest                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add minikube-local-cache-test:functional-232602                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache delete minikube-local-cache-test:functional-232602                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl images                                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-232602 cache reload                                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ kubectl │ functional-232602 kubectl -- --context functional-232602 get pods                                                                                     │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:29:05
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:29:05.243654 1305484 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:29:05.243837 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.243867 1305484 out.go:374] Setting ErrFile to fd 2...
	I1218 00:29:05.243888 1305484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:29:05.244277 1305484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:29:05.244868 1305484 out.go:368] Setting JSON to false
	I1218 00:29:05.245808 1305484 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25892,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:29:05.245939 1305484 start.go:143] virtualization:  
	I1218 00:29:05.249423 1305484 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:29:05.253059 1305484 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:29:05.253187 1305484 notify.go:221] Checking for updates...
	I1218 00:29:05.259241 1305484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:29:05.262171 1305484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:05.265173 1305484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:29:05.268135 1305484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:29:05.270950 1305484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:29:05.274293 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:05.274440 1305484 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:29:05.308275 1305484 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:29:05.308407 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.375725 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.366230286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.375834 1305484 docker.go:319] overlay module found
	I1218 00:29:05.378939 1305484 out.go:179] * Using the docker driver based on existing profile
	I1218 00:29:05.381619 1305484 start.go:309] selected driver: docker
	I1218 00:29:05.381657 1305484 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.381752 1305484 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:29:05.381892 1305484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:29:05.440724 1305484 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:29:05.431205912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:29:05.441147 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:05.441215 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:05.441270 1305484 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:05.444475 1305484 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:29:05.447488 1305484 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:29:05.450519 1305484 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:29:05.453580 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:05.453631 1305484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:29:05.453641 1305484 cache.go:65] Caching tarball of preloaded images
	I1218 00:29:05.453681 1305484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:29:05.453745 1305484 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:29:05.453756 1305484 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:29:05.453862 1305484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:29:05.474116 1305484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:29:05.474140 1305484 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:29:05.474160 1305484 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:29:05.474205 1305484 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:29:05.474271 1305484 start.go:364] duration metric: took 39.072µs to acquireMachinesLock for "functional-232602"
	I1218 00:29:05.474294 1305484 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:29:05.474305 1305484 fix.go:54] fixHost starting: 
	I1218 00:29:05.474585 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:05.494473 1305484 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:29:05.494511 1305484 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:29:05.497625 1305484 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:29:05.497657 1305484 machine.go:94] provisionDockerMachine start ...
	I1218 00:29:05.497756 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.514682 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.515020 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.515044 1305484 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:29:05.668376 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.668400 1305484 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:29:05.668465 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.700140 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.700482 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.700495 1305484 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:29:05.865944 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:29:05.866034 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:05.884487 1305484 main.go:143] libmachine: Using SSH client type: native
	I1218 00:29:05.884983 1305484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:29:05.885010 1305484 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:29:06.041516 1305484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:29:06.041541 1305484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:29:06.041561 1305484 ubuntu.go:190] setting up certificates
	I1218 00:29:06.041572 1305484 provision.go:84] configureAuth start
	I1218 00:29:06.041652 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.060898 1305484 provision.go:143] copyHostCerts
	I1218 00:29:06.060951 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.060994 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:29:06.061002 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:29:06.061080 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:29:06.061163 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061182 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:29:06.061187 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:29:06.061215 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:29:06.061256 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061273 1305484 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:29:06.061277 1305484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:29:06.061301 1305484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:29:06.061349 1305484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:29:06.177802 1305484 provision.go:177] copyRemoteCerts
	I1218 00:29:06.177898 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:29:06.177967 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.195440 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.308765 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 00:29:06.308835 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:29:06.326972 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 00:29:06.327095 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:29:06.345137 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 00:29:06.345225 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:29:06.363588 1305484 provision.go:87] duration metric: took 321.991809ms to configureAuth
	I1218 00:29:06.363617 1305484 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:29:06.363812 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:06.363826 1305484 machine.go:97] duration metric: took 866.163062ms to provisionDockerMachine
	I1218 00:29:06.363833 1305484 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:29:06.363845 1305484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:29:06.363904 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:29:06.363949 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.381445 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.493044 1305484 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:29:06.496574 1305484 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1218 00:29:06.496595 1305484 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1218 00:29:06.496599 1305484 command_runner.go:130] > VERSION_ID="12"
	I1218 00:29:06.496604 1305484 command_runner.go:130] > VERSION="12 (bookworm)"
	I1218 00:29:06.496612 1305484 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1218 00:29:06.496615 1305484 command_runner.go:130] > ID=debian
	I1218 00:29:06.496641 1305484 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1218 00:29:06.496649 1305484 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1218 00:29:06.496655 1305484 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1218 00:29:06.496744 1305484 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:29:06.496762 1305484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:29:06.496773 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:29:06.496837 1305484 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:29:06.496920 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:29:06.496932 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /etc/ssl/certs/12611482.pem
	I1218 00:29:06.497013 1305484 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:29:06.497022 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> /etc/test/nested/copy/1261148/hosts
	I1218 00:29:06.497083 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:29:06.504772 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:06.523736 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:29:06.542759 1305484 start.go:296] duration metric: took 178.908993ms for postStartSetup
	I1218 00:29:06.542856 1305484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:29:06.542901 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.560753 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.665778 1305484 command_runner.go:130] > 18%
	I1218 00:29:06.665854 1305484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:29:06.671095 1305484 command_runner.go:130] > 160G
	I1218 00:29:06.671651 1305484 fix.go:56] duration metric: took 1.19734099s for fixHost
	I1218 00:29:06.671671 1305484 start.go:83] releasing machines lock for "functional-232602", held for 1.197387766s
	I1218 00:29:06.671738 1305484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:29:06.688941 1305484 ssh_runner.go:195] Run: cat /version.json
	I1218 00:29:06.689003 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.689377 1305484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:29:06.689435 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:06.710307 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.721003 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:06.812429 1305484 command_runner.go:130] > {"iso_version": "v1.37.0-1765846775-22141", "kicbase_version": "v0.0.48-1765966054-22186", "minikube_version": "v1.37.0", "commit": "c344550999bcbb78f38b2df057224788bb2d30b2"}
	I1218 00:29:06.812585 1305484 ssh_runner.go:195] Run: systemctl --version
	I1218 00:29:06.910410 1305484 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 00:29:06.913301 1305484 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1218 00:29:06.913347 1305484 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1218 00:29:06.913421 1305484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 00:29:06.917811 1305484 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 00:29:06.917849 1305484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:29:06.917931 1305484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:29:06.925837 1305484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:29:06.925861 1305484 start.go:496] detecting cgroup driver to use...
	I1218 00:29:06.925891 1305484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:29:06.925936 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:29:06.941416 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:29:06.954870 1305484 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:29:06.954953 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:29:06.971407 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:29:06.985680 1305484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:29:07.097075 1305484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:29:07.240817 1305484 docker.go:234] disabling docker service ...
	I1218 00:29:07.240965 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:29:07.256804 1305484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:29:07.271026 1305484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:29:07.407005 1305484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:29:07.534286 1305484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:29:07.548592 1305484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:29:07.562819 1305484 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 00:29:07.564071 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:29:07.574541 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:29:07.583515 1305484 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:29:07.583615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:29:07.592330 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.601414 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:29:07.610399 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:29:07.619445 1305484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:29:07.627615 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:29:07.637099 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:29:07.646771 1305484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:29:07.656000 1305484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:29:07.663026 1305484 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 00:29:07.664029 1305484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:29:07.671707 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:07.789368 1305484 ssh_runner.go:195] Run: sudo systemctl restart containerd
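	(The sed commands above rewrite /etc/containerd/config.toml in place before containerd is restarted. A minimal sketch of the fragment they leave behind, assuming the stock kicbase config layout; only the keys touched above are shown, and the exact table nesting is an assumption:
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    sandbox_image = "registry.k8s.io/pause:3.10.1"
	    restrict_oom_score_adj = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	
	SystemdCgroup = false matches the "cgroupfs" driver detected on the host at 00:29:06, keeping containerd on the same cgroup driver as the kubelet.)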
	I1218 00:29:07.948156 1305484 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:29:07.948230 1305484 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:29:07.952108 1305484 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1218 00:29:07.952130 1305484 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 00:29:07.952136 1305484 command_runner.go:130] > Device: 0,72	Inode: 1611        Links: 1
	I1218 00:29:07.952144 1305484 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:07.952150 1305484 command_runner.go:130] > Access: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952154 1305484 command_runner.go:130] > Modify: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952160 1305484 command_runner.go:130] > Change: 2025-12-18 00:29:07.884459761 +0000
	I1218 00:29:07.952164 1305484 command_runner.go:130] >  Birth: -
	I1218 00:29:07.952461 1305484 start.go:564] Will wait 60s for crictl version
	I1218 00:29:07.952520 1305484 ssh_runner.go:195] Run: which crictl
	I1218 00:29:07.958389 1305484 command_runner.go:130] > /usr/local/bin/crictl
	I1218 00:29:07.959041 1305484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:29:07.980682 1305484 command_runner.go:130] > Version:  0.1.0
	I1218 00:29:07.980702 1305484 command_runner.go:130] > RuntimeName:  containerd
	I1218 00:29:07.980709 1305484 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1218 00:29:07.980714 1305484 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 00:29:07.982988 1305484 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:29:07.983059 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.002890 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.002977 1305484 ssh_runner.go:195] Run: containerd --version
	I1218 00:29:08.027238 1305484 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1218 00:29:08.034949 1305484 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:29:08.037919 1305484 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:29:08.055210 1305484 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:29:08.059294 1305484 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1218 00:29:08.059421 1305484 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:29:08.059535 1305484 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:29:08.059617 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.084496 1305484 command_runner.go:130] > {
	I1218 00:29:08.084519 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.084525 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084534 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.084540 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084546 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.084550 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084554 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084566 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.084574 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084578 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.084582 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084589 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084593 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084596 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084609 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.084616 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084642 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.084646 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084651 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084659 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.084666 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084671 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.084678 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084682 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084686 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084689 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084696 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.084705 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084716 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.084722 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084731 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084739 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.084751 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084756 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.084760 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.084764 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084768 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084777 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084786 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.084791 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084802 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.084805 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084810 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084818 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.084824 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084829 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.084835 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084839 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084851 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084855 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084860 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084863 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084868 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084876 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.084883 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084888 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.084892 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084896 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.084905 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.084917 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.084922 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.084929 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.084943 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.084946 1305484 command_runner.go:130] >       },
	I1218 00:29:08.084957 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.084961 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.084965 1305484 command_runner.go:130] >     },
	I1218 00:29:08.084968 1305484 command_runner.go:130] >     {
	I1218 00:29:08.084975 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.084983 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.084991 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.084998 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085003 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085019 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.085026 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085033 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.085037 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085041 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085044 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085050 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085054 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085057 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085060 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085067 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.085073 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085078 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.085084 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085088 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085106 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.085110 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085114 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.085124 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085128 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085132 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085138 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085148 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.085153 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085160 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.085166 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085170 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085182 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.085191 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085195 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.085199 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085203 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.085206 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085224 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085228 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.085231 1305484 command_runner.go:130] >     },
	I1218 00:29:08.085235 1305484 command_runner.go:130] >     {
	I1218 00:29:08.085244 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.085252 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.085258 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.085264 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085270 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.085278 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.085287 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.085291 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.085296 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.085300 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.085306 1305484 command_runner.go:130] >       },
	I1218 00:29:08.085313 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.085317 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.085320 1305484 command_runner.go:130] >     }
	I1218 00:29:08.085323 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.085325 1305484 command_runner.go:130] > }
	I1218 00:29:08.087939 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.087964 1305484 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:29:08.088036 1305484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:29:08.111236 1305484 command_runner.go:130] > {
	I1218 00:29:08.111264 1305484 command_runner.go:130] >   "images":  [
	I1218 00:29:08.111269 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111279 1305484 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1218 00:29:08.111286 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111295 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1218 00:29:08.111298 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111302 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111311 1305484 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1218 00:29:08.111318 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111322 1305484 command_runner.go:130] >       "size":  "40636774",
	I1218 00:29:08.111330 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111334 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111337 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111340 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111347 1305484 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 00:29:08.111352 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111358 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 00:29:08.111364 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111368 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111379 1305484 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 00:29:08.111391 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111396 1305484 command_runner.go:130] >       "size":  "8034419",
	I1218 00:29:08.111400 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111404 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111407 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111410 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111417 1305484 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1218 00:29:08.111421 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111426 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1218 00:29:08.111429 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111437 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111447 1305484 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1218 00:29:08.111454 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111462 1305484 command_runner.go:130] >       "size":  "21168808",
	I1218 00:29:08.111467 1305484 command_runner.go:130] >       "username":  "nonroot",
	I1218 00:29:08.111475 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111478 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111483 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111491 1305484 command_runner.go:130] >       "id":  "sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57",
	I1218 00:29:08.111499 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111504 1305484 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.6-0"
	I1218 00:29:08.111507 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111511 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111519 1305484 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"
	I1218 00:29:08.111522 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111527 1305484 command_runner.go:130] >       "size":  "21749640",
	I1218 00:29:08.111533 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111537 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111543 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111547 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111559 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111562 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111565 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111573 1305484 command_runner.go:130] >       "id":  "sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54",
	I1218 00:29:08.111580 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111585 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"
	I1218 00:29:08.111588 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111592 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111600 1305484 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"
	I1218 00:29:08.111606 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111611 1305484 command_runner.go:130] >       "size":  "24692223",
	I1218 00:29:08.111617 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111626 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111632 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111635 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111639 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111646 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111652 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111659 1305484 command_runner.go:130] >       "id":  "sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a",
	I1218 00:29:08.111662 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111668 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"
	I1218 00:29:08.111671 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111676 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111690 1305484 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"
	I1218 00:29:08.111697 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111701 1305484 command_runner.go:130] >       "size":  "20672157",
	I1218 00:29:08.111707 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111711 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111716 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111720 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111739 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111742 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111746 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111755 1305484 command_runner.go:130] >       "id":  "sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e",
	I1218 00:29:08.111759 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111768 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-rc.1"
	I1218 00:29:08.111771 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111775 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111785 1305484 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"
	I1218 00:29:08.111798 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111802 1305484 command_runner.go:130] >       "size":  "22432301",
	I1218 00:29:08.111805 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111809 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111813 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111816 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111825 1305484 command_runner.go:130] >       "id":  "sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde",
	I1218 00:29:08.111835 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111840 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-rc.1"
	I1218 00:29:08.111843 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111855 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111866 1305484 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"
	I1218 00:29:08.111872 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111876 1305484 command_runner.go:130] >       "size":  "15405535",
	I1218 00:29:08.111880 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111884 1305484 command_runner.go:130] >         "value":  "0"
	I1218 00:29:08.111889 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111893 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111899 1305484 command_runner.go:130] >       "pinned":  false
	I1218 00:29:08.111903 1305484 command_runner.go:130] >     },
	I1218 00:29:08.111913 1305484 command_runner.go:130] >     {
	I1218 00:29:08.111921 1305484 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1218 00:29:08.111925 1305484 command_runner.go:130] >       "repoTags":  [
	I1218 00:29:08.111929 1305484 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1218 00:29:08.111933 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111937 1305484 command_runner.go:130] >       "repoDigests":  [
	I1218 00:29:08.111947 1305484 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1218 00:29:08.111959 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.111963 1305484 command_runner.go:130] >       "size":  "267939",
	I1218 00:29:08.111967 1305484 command_runner.go:130] >       "uid":  {
	I1218 00:29:08.111971 1305484 command_runner.go:130] >         "value":  "65535"
	I1218 00:29:08.111978 1305484 command_runner.go:130] >       },
	I1218 00:29:08.111982 1305484 command_runner.go:130] >       "username":  "",
	I1218 00:29:08.111989 1305484 command_runner.go:130] >       "pinned":  true
	I1218 00:29:08.111992 1305484 command_runner.go:130] >     }
	I1218 00:29:08.112001 1305484 command_runner.go:130] >   ]
	I1218 00:29:08.112004 1305484 command_runner.go:130] > }
	I1218 00:29:08.114369 1305484 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:29:08.114392 1305484 cache_images.go:86] Images are preloaded, skipping loading
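	(Note: minikube skips loading here because every required image already appears in the runtime's store. A minimal sketch of the same check by hand, assuming crictl and jq are available on the node — this reads the same JSON minikube parses above:

	    # list the repo tags containerd reports over the CRI
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'
	)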
	I1218 00:29:08.114401 1305484 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:29:08.114566 1305484 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
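	(Note: the rendered unit above uses the standard systemd override idiom — the first, empty `ExecStart=` clears the ExecStart inherited from the base kubelet.service before the new command line is set. A sketch of installing such a drop-in by hand, with the kubelet flags trimmed for brevity; paths match those used below:

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --config=/var/lib/kubelet/config.yaml
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet
	)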
	I1218 00:29:08.114639 1305484 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:29:08.137373 1305484 command_runner.go:130] > {
	I1218 00:29:08.137395 1305484 command_runner.go:130] >   "cniconfig": {
	I1218 00:29:08.137400 1305484 command_runner.go:130] >     "Networks": [
	I1218 00:29:08.137405 1305484 command_runner.go:130] >       {
	I1218 00:29:08.137411 1305484 command_runner.go:130] >         "Config": {
	I1218 00:29:08.137420 1305484 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1218 00:29:08.137425 1305484 command_runner.go:130] >           "Name": "cni-loopback",
	I1218 00:29:08.137430 1305484 command_runner.go:130] >           "Plugins": [
	I1218 00:29:08.137433 1305484 command_runner.go:130] >             {
	I1218 00:29:08.137438 1305484 command_runner.go:130] >               "Network": {
	I1218 00:29:08.137442 1305484 command_runner.go:130] >                 "ipam": {},
	I1218 00:29:08.137452 1305484 command_runner.go:130] >                 "type": "loopback"
	I1218 00:29:08.137456 1305484 command_runner.go:130] >               },
	I1218 00:29:08.137463 1305484 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1218 00:29:08.137467 1305484 command_runner.go:130] >             }
	I1218 00:29:08.137470 1305484 command_runner.go:130] >           ],
	I1218 00:29:08.137483 1305484 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1218 00:29:08.137489 1305484 command_runner.go:130] >         },
	I1218 00:29:08.137494 1305484 command_runner.go:130] >         "IFName": "lo"
	I1218 00:29:08.137498 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137503 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137508 1305484 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1218 00:29:08.137515 1305484 command_runner.go:130] >     "PluginDirs": [
	I1218 00:29:08.137519 1305484 command_runner.go:130] >       "/opt/cni/bin"
	I1218 00:29:08.137522 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137526 1305484 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1218 00:29:08.137529 1305484 command_runner.go:130] >     "Prefix": "eth"
	I1218 00:29:08.137533 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137536 1305484 command_runner.go:130] >   "config": {
	I1218 00:29:08.137540 1305484 command_runner.go:130] >     "cdiSpecDirs": [
	I1218 00:29:08.137544 1305484 command_runner.go:130] >       "/etc/cdi",
	I1218 00:29:08.137554 1305484 command_runner.go:130] >       "/var/run/cdi"
	I1218 00:29:08.137569 1305484 command_runner.go:130] >     ],
	I1218 00:29:08.137573 1305484 command_runner.go:130] >     "cni": {
	I1218 00:29:08.137576 1305484 command_runner.go:130] >       "binDir": "",
	I1218 00:29:08.137580 1305484 command_runner.go:130] >       "binDirs": [
	I1218 00:29:08.137584 1305484 command_runner.go:130] >         "/opt/cni/bin"
	I1218 00:29:08.137587 1305484 command_runner.go:130] >       ],
	I1218 00:29:08.137591 1305484 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1218 00:29:08.137595 1305484 command_runner.go:130] >       "confTemplate": "",
	I1218 00:29:08.137598 1305484 command_runner.go:130] >       "ipPref": "",
	I1218 00:29:08.137602 1305484 command_runner.go:130] >       "maxConfNum": 1,
	I1218 00:29:08.137606 1305484 command_runner.go:130] >       "setupSerially": false,
	I1218 00:29:08.137610 1305484 command_runner.go:130] >       "useInternalLoopback": false
	I1218 00:29:08.137613 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137620 1305484 command_runner.go:130] >     "containerd": {
	I1218 00:29:08.137627 1305484 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1218 00:29:08.137632 1305484 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1218 00:29:08.137639 1305484 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1218 00:29:08.137645 1305484 command_runner.go:130] >       "runtimes": {
	I1218 00:29:08.137648 1305484 command_runner.go:130] >         "runc": {
	I1218 00:29:08.137654 1305484 command_runner.go:130] >           "ContainerAnnotations": null,
	I1218 00:29:08.137665 1305484 command_runner.go:130] >           "PodAnnotations": null,
	I1218 00:29:08.137670 1305484 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1218 00:29:08.137674 1305484 command_runner.go:130] >           "cgroupWritable": false,
	I1218 00:29:08.137679 1305484 command_runner.go:130] >           "cniConfDir": "",
	I1218 00:29:08.137685 1305484 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1218 00:29:08.137689 1305484 command_runner.go:130] >           "io_type": "",
	I1218 00:29:08.137695 1305484 command_runner.go:130] >           "options": {
	I1218 00:29:08.137699 1305484 command_runner.go:130] >             "BinaryName": "",
	I1218 00:29:08.137703 1305484 command_runner.go:130] >             "CriuImagePath": "",
	I1218 00:29:08.137707 1305484 command_runner.go:130] >             "CriuWorkPath": "",
	I1218 00:29:08.137710 1305484 command_runner.go:130] >             "IoGid": 0,
	I1218 00:29:08.137715 1305484 command_runner.go:130] >             "IoUid": 0,
	I1218 00:29:08.137726 1305484 command_runner.go:130] >             "NoNewKeyring": false,
	I1218 00:29:08.137734 1305484 command_runner.go:130] >             "Root": "",
	I1218 00:29:08.137738 1305484 command_runner.go:130] >             "ShimCgroup": "",
	I1218 00:29:08.137742 1305484 command_runner.go:130] >             "SystemdCgroup": false
	I1218 00:29:08.137746 1305484 command_runner.go:130] >           },
	I1218 00:29:08.137752 1305484 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1218 00:29:08.137761 1305484 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1218 00:29:08.137764 1305484 command_runner.go:130] >           "runtimePath": "",
	I1218 00:29:08.137770 1305484 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1218 00:29:08.137780 1305484 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1218 00:29:08.137784 1305484 command_runner.go:130] >           "snapshotter": ""
	I1218 00:29:08.137787 1305484 command_runner.go:130] >         }
	I1218 00:29:08.137790 1305484 command_runner.go:130] >       }
	I1218 00:29:08.137794 1305484 command_runner.go:130] >     },
	I1218 00:29:08.137804 1305484 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1218 00:29:08.137817 1305484 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1218 00:29:08.137822 1305484 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1218 00:29:08.137828 1305484 command_runner.go:130] >     "disableApparmor": false,
	I1218 00:29:08.137835 1305484 command_runner.go:130] >     "disableHugetlbController": true,
	I1218 00:29:08.137840 1305484 command_runner.go:130] >     "disableProcMount": false,
	I1218 00:29:08.137844 1305484 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1218 00:29:08.137853 1305484 command_runner.go:130] >     "enableCDI": true,
	I1218 00:29:08.137857 1305484 command_runner.go:130] >     "enableSelinux": false,
	I1218 00:29:08.137862 1305484 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1218 00:29:08.137866 1305484 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1218 00:29:08.137871 1305484 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1218 00:29:08.137878 1305484 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1218 00:29:08.137882 1305484 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1218 00:29:08.137887 1305484 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1218 00:29:08.137894 1305484 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1218 00:29:08.137901 1305484 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137906 1305484 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1218 00:29:08.137921 1305484 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1218 00:29:08.137929 1305484 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1218 00:29:08.137940 1305484 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1218 00:29:08.137943 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137947 1305484 command_runner.go:130] >   "features": {
	I1218 00:29:08.137952 1305484 command_runner.go:130] >     "supplemental_groups_policy": true
	I1218 00:29:08.137955 1305484 command_runner.go:130] >   },
	I1218 00:29:08.137962 1305484 command_runner.go:130] >   "golang": "go1.24.9",
	I1218 00:29:08.137972 1305484 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137984 1305484 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1218 00:29:08.137998 1305484 command_runner.go:130] >   "runtimeHandlers": [
	I1218 00:29:08.138001 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138005 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138009 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138019 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138022 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138025 1305484 command_runner.go:130] >     },
	I1218 00:29:08.138028 1305484 command_runner.go:130] >     {
	I1218 00:29:08.138043 1305484 command_runner.go:130] >       "features": {
	I1218 00:29:08.138048 1305484 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1218 00:29:08.138053 1305484 command_runner.go:130] >         "user_namespaces": true
	I1218 00:29:08.138056 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138060 1305484 command_runner.go:130] >       "name": "runc"
	I1218 00:29:08.138065 1305484 command_runner.go:130] >     }
	I1218 00:29:08.138069 1305484 command_runner.go:130] >   ],
	I1218 00:29:08.138074 1305484 command_runner.go:130] >   "status": {
	I1218 00:29:08.138078 1305484 command_runner.go:130] >     "conditions": [
	I1218 00:29:08.138089 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138093 1305484 command_runner.go:130] >         "message": "",
	I1218 00:29:08.138097 1305484 command_runner.go:130] >         "reason": "",
	I1218 00:29:08.138101 1305484 command_runner.go:130] >         "status": true,
	I1218 00:29:08.138112 1305484 command_runner.go:130] >         "type": "RuntimeReady"
	I1218 00:29:08.138115 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138118 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138128 1305484 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1218 00:29:08.138137 1305484 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1218 00:29:08.138140 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138147 1305484 command_runner.go:130] >         "type": "NetworkReady"
	I1218 00:29:08.138150 1305484 command_runner.go:130] >       },
	I1218 00:29:08.138155 1305484 command_runner.go:130] >       {
	I1218 00:29:08.138178 1305484 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1218 00:29:08.138187 1305484 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1218 00:29:08.138192 1305484 command_runner.go:130] >         "status": false,
	I1218 00:29:08.138197 1305484 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1218 00:29:08.138203 1305484 command_runner.go:130] >       }
	I1218 00:29:08.138206 1305484 command_runner.go:130] >     ]
	I1218 00:29:08.138209 1305484 command_runner.go:130] >   }
	I1218 00:29:08.138212 1305484 command_runner.go:130] > }
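	(Note: the NetworkReady=false condition in the `crictl info` output above is expected at this point — no CNI config has been written to /etc/cni/net.d yet; kindnet is only selected on the next line. A one-liner to watch just the readiness conditions, assuming jq:

	    sudo crictl info | jq '.status.conditions[] | {type, status, reason}'
	)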
	I1218 00:29:08.140863 1305484 cni.go:84] Creating CNI manager for ""
	I1218 00:29:08.140888 1305484 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:29:08.140910 1305484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:29:08.140937 1305484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
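	(Note: purely for orientation, the core settings above map onto plain kubeadm flags roughly as follows. This is only an illustrative equivalence — minikube does not run these flags, it renders the config file shown next:

	    kubeadm init \
	      --apiserver-advertise-address=192.168.49.2 \
	      --apiserver-bind-port=8441 \
	      --pod-network-cidr=10.244.0.0/16 \
	      --service-cidr=10.96.0.0/12 \
	      --cri-socket=unix:///run/containerd/containerd.sock
	)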
	I1218 00:29:08.141052 1305484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 00:29:08.141124 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:29:08.148733 1305484 command_runner.go:130] > kubeadm
	I1218 00:29:08.148755 1305484 command_runner.go:130] > kubectl
	I1218 00:29:08.148759 1305484 command_runner.go:130] > kubelet
	I1218 00:29:08.149813 1305484 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:29:08.149929 1305484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:29:08.157899 1305484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:29:08.171631 1305484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:29:08.185534 1305484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
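	(Note: once the rendered config lands at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked in place. A sketch, assuming a kubeadm new enough (>= 1.26) to have `kubeadm config validate`:

	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new
	)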
	I1218 00:29:08.199213 1305484 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:29:08.203261 1305484 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1218 00:29:08.203343 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:08.317482 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
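	(Note: the grep above confirms the control-plane endpoint is already pinned in /etc/hosts. If it were missing, the equivalent manual fix would be, as a sketch:

	    grep -q "control-plane.minikube.internal" /etc/hosts \
	      || echo "192.168.49.2 control-plane.minikube.internal" | sudo tee -a /etc/hosts
	)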
	I1218 00:29:08.643734 1305484 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:29:08.643804 1305484 certs.go:195] generating shared ca certs ...
	I1218 00:29:08.643833 1305484 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:08.644029 1305484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:29:08.644119 1305484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:29:08.644145 1305484 certs.go:257] generating profile certs ...
	I1218 00:29:08.644307 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:29:08.644441 1305484 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:29:08.644531 1305484 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:29:08.644560 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 00:29:08.644603 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 00:29:08.644662 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 00:29:08.644693 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 00:29:08.644737 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 00:29:08.644768 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 00:29:08.644809 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 00:29:08.644841 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 00:29:08.644932 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:29:08.645003 1305484 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:29:08.645041 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:29:08.645094 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:29:08.645151 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:29:08.645217 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:29:08.645309 1305484 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:29:08.645380 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.645420 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.645463 1305484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem -> /usr/share/ca-certificates/1261148.pem
	I1218 00:29:08.646318 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:29:08.666060 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:29:08.685232 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:29:08.704134 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:29:08.723554 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:29:08.741698 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:29:08.759300 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:29:08.777293 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:29:08.794355 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:29:08.812054 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:29:08.830087 1305484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:29:08.847372 1305484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:29:08.860094 1305484 ssh_runner.go:195] Run: openssl version
	I1218 00:29:08.866090 1305484 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1218 00:29:08.866507 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.874034 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:29:08.881757 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885459 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885707 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.885773 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:29:08.926478 1305484 command_runner.go:130] > 3ec20f2e
	I1218 00:29:08.926977 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:29:08.934462 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.941654 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:29:08.949245 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953111 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953171 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.953238 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:29:08.993847 1305484 command_runner.go:130] > b5213941
	I1218 00:29:08.994434 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:29:09.002229 1305484 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.011682 1305484 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:29:09.020345 1305484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025298 1305484 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025353 1305484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.025405 1305484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:29:09.072271 1305484 command_runner.go:130] > 51391683
	I1218 00:29:09.072867 1305484 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
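	(Note: the hash/symlink pairs above follow the OpenSSL CA directory convention — `openssl x509 -hash -noout` prints the subject-name hash, and the trust store looks each CA up as <hash>.0. The same dance for one certificate, condensed into a sketch:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941, as logged above
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)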
	I1218 00:29:09.081208 1305484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085518 1305484 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:29:09.085547 1305484 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1218 00:29:09.085554 1305484 command_runner.go:130] > Device: 259,1	Inode: 2346127     Links: 1
	I1218 00:29:09.085561 1305484 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 00:29:09.085576 1305484 command_runner.go:130] > Access: 2025-12-18 00:25:01.733890088 +0000
	I1218 00:29:09.085582 1305484 command_runner.go:130] > Modify: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085594 1305484 command_runner.go:130] > Change: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085606 1305484 command_runner.go:130] >  Birth: 2025-12-18 00:20:57.903191363 +0000
	I1218 00:29:09.085761 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:29:09.130673 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.131215 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:29:09.179276 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.179949 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:29:09.226958 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.227517 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:29:09.269182 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.269731 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:29:09.310659 1305484 command_runner.go:130] > Certificate will not expire
	I1218 00:29:09.311193 1305484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 00:29:09.352162 1305484 command_runner.go:130] > Certificate will not expire
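	(Note: each check above uses `-checkend 86400`, which exits 0 only if the certificate stays valid for at least the next 24 hours. The same sweep over the certs named above, as a sketch:

	    for c in apiserver-etcd-client apiserver-kubelet-client \
	             etcd/server etcd/peer front-proxy-client; do
	      openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	        && echo "$c: valid for >=24h" || echo "$c: expiring soon"
	    done
	)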
	I1218 00:29:09.352228 1305484 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:29:09.352303 1305484 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:29:09.352361 1305484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:29:09.379004 1305484 cri.go:89] found id: ""
	I1218 00:29:09.379101 1305484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:29:09.386224 1305484 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1218 00:29:09.386247 1305484 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1218 00:29:09.386254 1305484 command_runner.go:130] > /var/lib/minikube/etcd:
	I1218 00:29:09.387165 1305484 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:29:09.387182 1305484 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:29:09.387261 1305484 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:29:09.396523 1305484 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:29:09.396996 1305484 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.397115 1305484 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "functional-232602" cluster setting kubeconfig missing "functional-232602" context setting]
	I1218 00:29:09.397401 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.397832 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.398029 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.398566 1305484 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1218 00:29:09.398586 1305484 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1218 00:29:09.398591 1305484 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1218 00:29:09.398599 1305484 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1218 00:29:09.398604 1305484 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1218 00:29:09.398644 1305484 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1218 00:29:09.398857 1305484 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:29:09.408050 1305484 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1218 00:29:09.408132 1305484 kubeadm.go:602] duration metric: took 20.943322ms to restartPrimaryControlPlane
	I1218 00:29:09.408155 1305484 kubeadm.go:403] duration metric: took 55.931707ms to StartCluster
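	(Note: the fast restart path above hinged on the `diff -u` two lines earlier — an empty diff between the live kubeadm.yaml and the freshly rendered .new file means the running control plane can be reused without another `kubeadm init`. Reproducible by hand as a sketch:

	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      && echo "no reconfiguration required"
	)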
	I1218 00:29:09.408213 1305484 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.408302 1305484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.409063 1305484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:29:09.409379 1305484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 00:29:09.409544 1305484 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 00:29:09.409943 1305484 addons.go:70] Setting storage-provisioner=true in profile "functional-232602"
	I1218 00:29:09.409964 1305484 addons.go:239] Setting addon storage-provisioner=true in "functional-232602"
	I1218 00:29:09.409988 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.409637 1305484 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:29:09.410125 1305484 addons.go:70] Setting default-storageclass=true in profile "functional-232602"
	I1218 00:29:09.410148 1305484 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-232602"
	I1218 00:29:09.410443 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.410469 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.418864 1305484 out.go:179] * Verifying Kubernetes components...
	I1218 00:29:09.421814 1305484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:29:09.464044 1305484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 00:29:09.465759 1305484 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:29:09.465914 1305484 kapi.go:59] client config for functional-232602: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 00:29:09.466265 1305484 addons.go:239] Setting addon default-storageclass=true in "functional-232602"
	I1218 00:29:09.466296 1305484 host.go:66] Checking if "functional-232602" exists ...
	I1218 00:29:09.466740 1305484 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:29:09.466941 1305484 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.466952 1305484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 00:29:09.466995 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.523535 1305484 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:09.523562 1305484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 00:29:09.523638 1305484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:29:09.539603 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.550039 1305484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:29:09.631300 1305484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:29:09.666484 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:09.687810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:10.394630 1305484 node_ready.go:35] waiting up to 6m0s for node "functional-232602" to be "Ready" ...
	I1218 00:29:10.394645 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.394905 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.394947 1305484 retry.go:31] will retry after 177.31527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395019 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.395055 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.395073 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.395086 1305484 retry.go:31] will retry after 150.104012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
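	Both addon applies fail for the same underlying reason: the apiserver behind localhost:8441 is not accepting connections yet, so kubectl cannot download the OpenAPI schema it needs for client-side validation. minikube responds by re-running each apply with a short, growing, jittered delay (177ms and 150ms here, climbing toward roughly 10s later in this log). A sketch of that retry shape using the apimachinery wait helpers (an illustration of the pattern, not minikube's retry package):

    package retrydemo

    import (
    	"os/exec"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // applyWithBackoff re-runs `kubectl apply` until it succeeds or the
    // backoff is exhausted, mirroring the retry.go lines in the log.
    func applyWithBackoff(manifest string) error {
    	backoff := wait.Backoff{
    		Duration: 150 * time.Millisecond, // first delay, as in the log
    		Factor:   2.0,                    // delays roughly double
    		Jitter:   0.5,                    // log delays are jittered
    		Steps:    10,
    	}
    	return wait.ExponentialBackoff(backoff, func() (bool, error) {
    		err := exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
    		return err == nil, nil // not done yet; retry after the next delay
    	})
    }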
	I1218 00:29:10.395151 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.395566 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.545905 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:10.572498 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.615825 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.615864 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.615882 1305484 retry.go:31] will retry after 386.236336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650773 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.650838 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.650865 1305484 retry.go:31] will retry after 280.734601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
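	The validation error is a symptom rather than the fault: kubectl's dial to [::1]:8441 is refused because the apiserver has not come back up after the restart. A standalone probe of the endpoint reproduces the same failure (a diagnostic sketch only, not part of the test suite; the /readyz path and the skip-verify transport are assumptions for illustration):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	c := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver cert is signed by minikube's own CA;
    			// skip verification for this one-off probe.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := c.Get("https://localhost:8441/readyz")
    	if err != nil {
    		fmt.Println("apiserver not ready:", err) // e.g. connection refused
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver status:", resp.Status)
    }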
	I1218 00:29:10.894991 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:10.895069 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:10.895425 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:10.932808 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:10.998277 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:10.998407 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:10.998429 1305484 retry.go:31] will retry after 660.849815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.003467 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.066495 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.066548 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.066567 1305484 retry.go:31] will retry after 792.514458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.395083 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.659960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:11.722453 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.722493 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.722511 1305484 retry.go:31] will retry after 472.801155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.859919 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:11.895517 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:11.895589 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:11.895884 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:11.931975 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:11.936172 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:11.936234 1305484 retry.go:31] will retry after 583.966469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.195539 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:12.255280 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.259094 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.259131 1305484 retry.go:31] will retry after 926.212833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.395399 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.395475 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.395812 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:12.395919 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
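	While the applies retry, the node_ready.go loop polls GET /api/v1/nodes/functional-232602 roughly every 500ms, treating connection-refused as "not ready yet" and surfacing the warning above only periodically. The equivalent check with the public client-go API, as a sketch (minikube's internal helper may differ):

    package readydemo

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    // Transient API errors (like the connection refused seen above) count
    // as "not ready yet" so the caller can keep polling.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, nil // e.g. dial tcp 192.168.49.2:8441: connection refused
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }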
	I1218 00:29:12.520996 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:12.581638 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:12.581728 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.581762 1305484 retry.go:31] will retry after 1.65494693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:12.894979 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:12.895073 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:12.895402 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.186032 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:13.243730 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:13.248249 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.248281 1305484 retry.go:31] will retry after 1.192911742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:13.395563 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.395681 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.395976 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:13.895848 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:13.895954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:13.896330 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:14.237854 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:14.298889 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.302600 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.302641 1305484 retry.go:31] will retry after 1.5263786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.395779 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.395871 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.396209 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:14.396293 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:14.441356 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:14.508115 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:14.508165 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.508184 1305484 retry.go:31] will retry after 3.305911776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:14.895774 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:14.895890 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:14.896219 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.394975 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.395063 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.395415 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:15.829900 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:15.892510 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:15.892556 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.892574 1305484 retry.go:31] will retry after 3.944012673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:15.895725 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:15.895798 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:15.896127 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.394873 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.394951 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.395246 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:16.894968 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:16.895048 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:16.895399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:16.895481 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:17.394984 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:17.814960 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:17.873346 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:17.873415 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.873437 1305484 retry.go:31] will retry after 2.287204088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:17.895511 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:17.895579 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:17.895833 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.395764 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.395845 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.396148 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:18.895031 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:18.895440 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:19.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.395284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.395328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:19.836815 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:19.891772 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895038 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:19.895109 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:19.895347 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:19.895501 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:19.895520 1305484 retry.go:31] will retry after 2.272181462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.160871 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:20.233754 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:20.233805 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.233824 1305484 retry.go:31] will retry after 9.03130365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:20.395263 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.395392 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.395710 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:20.894916 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:20.894992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:20.895300 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:21.395041 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.395135 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.395466 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:21.395525 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:21.894931 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:21.895012 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:21.895341 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.168810 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:22.226105 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:22.229620 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.229649 1305484 retry.go:31] will retry after 6.326012676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:22.394929 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.395003 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.395290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:22.894864 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:22.894939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:22.895280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.395030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.395383 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:23.895016 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:23.895087 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:23.895360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:23.895414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:24.395042 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.395119 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.395414 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:24.895109 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:24.895188 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:24.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.395358 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.395437 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.395700 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:25.895538 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:25.895612 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:25.895906 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:25.895954 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:26.395465 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.395571 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.395892 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:26.895653 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:26.895735 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:26.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.395741 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.395852 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.396210 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:27.895856 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:27.895939 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:27.896273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:27.896328 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:29:28.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.394974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.395220 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:28.556610 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:28.617128 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:28.617182 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.617202 1305484 retry.go:31] will retry after 6.797257953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:28.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:28.895668 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:28.895975 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.265354 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:29.327180 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:29.327227 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.327246 1305484 retry.go:31] will retry after 10.081474738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:29.395407 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.395481 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.395821 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:29.895626 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:29.895701 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:29.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:29:30.395476 1305484 type.go:168] "Request Body" body=""
	I1218 00:29:30.395558 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:29:30.395870 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:29:30.395928 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[… the ~500 ms GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 polls repeat unchanged from 00:29:30.895 through 00:29:35.395, every response refused; node_ready "will retry" warnings recur at 00:29:32.396 and 00:29:34.895 …]
	I1218 00:29:35.415065 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:35.470618 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:35.474707 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:35.474739 1305484 retry.go:31] will retry after 12.346765183s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
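The apply fails inside the node because kubectl's validation needs the apiserver's OpenAPI document and nothing is listening on localhost:8441, so minikube schedules a retry with a randomized delay (the retry.go:31 call site above). A self-contained sketch of that retry shape follows; the helper name, attempt count, and delays are assumptions, not minikube's actual retry helper.

```go
// Minimal sketch of a jittered retry around a kubectl apply, matching the
// "will retry after ..." pattern in the log. Names and delays are illustrative.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply runs fn up to attempts times, sleeping base plus jitter between tries.
func retryApply(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	err := retryApply(5, 10*time.Second, func() error {
		// Same shape as the logged command; sudo/KUBECONFIG env handling elided.
		out, err := exec.Command("kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	})
	if err != nil {
		fmt.Println(err)
	}
}
```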
	[… same ~500 ms node polls, 00:29:35.894–00:29:39.395, all connection refused; node_ready warning at 00:29:37.396 …]
	I1218 00:29:39.409781 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:39.473091 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:39.473144 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:39.473164 1305484 retry.go:31] will retry after 18.475103934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
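Both failure modes in this window trace to the same root cause: kube-apiserver is not accepting TCP connections on port 8441, so the node polls (192.168.49.2:8441) and kubectl's OpenAPI download (localhost:8441) are both refused. A quick probe for that state, using the two addresses taken from the log, might look like the diagnostic sketch below (an illustration, not part of the test):

```go
// Minimal sketch: probe the apiserver endpoints the log is failing against,
// to distinguish "connection refused" (nothing listening) from a hung server.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// ECONNREFUSED here matches the log: kube-apiserver is not up yet.
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: listening\n", addr)
	}
}
```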
	[… same ~500 ms node polls, 00:29:39.895–00:29:47.395, all connection refused; node_ready warnings at 00:29:39.896, 00:29:42.395, 00:29:44.395, 00:29:46.895 …]
	I1218 00:29:47.821776 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:29:47.880326 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:47.883900 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:47.883932 1305484 retry.go:31] will retry after 18.240859758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[… same ~500 ms node polls, 00:29:47.895–00:29:57.895, all connection refused; node_ready warnings at 00:29:48.895, 00:29:50.895, 00:29:53.395, 00:29:55.895 …]
	I1218 00:29:57.948848 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:29:58.011608 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:29:58.015264 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:29:58.015303 1305484 retry.go:31] will retry after 17.396243449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[… same ~500 ms node polls, 00:29:58.394–00:30:05.895, all connection refused; node_ready warnings at 00:29:58.395, 00:30:00.395, 00:30:02.895, 00:30:05.395 …]
	I1218 00:30:06.125881 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:06.190863 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:06.190916 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:06.190936 1305484 retry.go:31] will retry after 24.931144034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[… same ~500 ms node polls, 00:30:06.395–00:30:15.395, all connection refused; node_ready warnings at 00:30:07.395, 00:30:09.895, 00:30:12.395, 00:30:14.895 …]
	I1218 00:30:15.411948 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:15.467885 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:15.471996 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:15.472026 1305484 retry.go:31] will retry after 23.671964263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 00:30:15.895583 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:15.895665 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:15.895991 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:16.395769 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:16.395850 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:16.396115 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:30:16.894852 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:16.894935 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:16.895261 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:16.895324 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 poll repeated every ~500 ms from 00:30:17.394 through 00:30:30.895 with no response; node_ready.go:55 logged the connection-refused "will retry" warning again at 00:30:18, 00:30:21, 00:30:23, 00:30:25, 00:30:28, and 00:30:30 ...]
	I1218 00:30:31.123262 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 00:30:31.181409 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.184938 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:31.185056 1305484 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... polling continued unchanged every ~500 ms from 00:30:31.395 through 00:30:38.895; node_ready.go:55 repeated the connection-refused warning at 00:30:32, 00:30:34, and 00:30:36 ...]
	I1218 00:30:39.144879 1305484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 00:30:39.206506 1305484 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206561 1305484 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 00:30:39.206652 1305484 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 00:30:39.209780 1305484 out.go:179] * Enabled addons: 
	I1218 00:30:39.213292 1305484 addons.go:530] duration metric: took 1m29.803748848s for enable addons: enabled=[]
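
	Both addon failures above reduce to one root cause: `kubectl apply` validates manifests by fetching `/openapi/v2` from the apiserver, and nothing was listening on port 8441, so every apply (and the readiness poll) got connection refused and the addon phase ended with `enabled=[]`. A trivial probe like the following (a hypothetical diagnostic, not part of the test suite) separates "apiserver down" from "bad manifest":

```go
// Hypothetical probe: confirm nothing is accepting TCP connections on the
// apiserver port before suspecting the addon manifests themselves.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // expect "connection refused", as in the log
			continue
		}
		conn.Close()
		fmt.Printf("%s: accepting connections\n", addr)
	}
}
```
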
	I1218 00:30:39.394864 1305484 type.go:168] "Request Body" body=""
	I1218 00:30:39.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:30:39.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:30:39.395343 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the poll kept running every ~500 ms from 00:30:39.895 through 00:31:13.395, every request going unanswered; node_ready.go:55 logged the same connection-refused warning roughly every two seconds (00:30:41 through 00:31:12) ...]
	I1218 00:31:13.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:13.895004 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:13.895326 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.394884 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.394960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.395283 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:14.894810 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:14.894876 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:14.895171 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:14.895233 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:15.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.395266 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.395614 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:15.894925 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:15.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:15.895319 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.394906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.394976 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.395230 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:16.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:16.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:16.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:16.895449 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:17.395167 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.395260 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.395607 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:17.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:17.895160 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:17.895445 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.394976 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.395051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.395367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:18.894942 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:18.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:18.895357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:19.395005 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.395075 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.395332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:19.395376 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:19.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:19.895020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:19.895339 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.395282 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.395364 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.395694 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:20.895475 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:20.895552 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:20.895809 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:21.395604 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.395678 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.395990 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:21.396041 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:21.895659 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:21.895733 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:21.896015 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.395655 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.395728 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.395992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:22.895435 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:22.895515 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:22.895848 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:23.395649 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.395732 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.396074 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:23.396134 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:23.895883 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:23.895960 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:23.896252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.394837 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.394919 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.395263 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:24.894847 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:24.894932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:24.895271 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.395082 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.395154 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.395412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:25.895068 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:25.895145 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:25.895475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:25.895531 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:26.395075 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.395488 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:26.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:26.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:26.895250 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.395036 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.395377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:27.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:27.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:27.895371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:28.395072 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.395157 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.395417 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:28.395459 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:28.894956 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:28.895034 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:28.895384 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:29.395100 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:29.395178 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:29.395520 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:29.894938 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:29.895006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:29.895267 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:30.395237 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:30.395365 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:30.395704 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:30.395760 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:30.895519 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:30.895599 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:30.895940 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:31.395676 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:31.395750 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:31.396048 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:31.895809 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:31.895895 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:31.896244 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:32.394845 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:32.394971 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:32.395293 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:32.894900 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:32.894975 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:32.895268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:32.895326 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:33.394994 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:33.395070 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:33.395437 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:33.895135 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:33.895218 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:33.895535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:34.395882 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:34.395954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:34.396208 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:34.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:34.894979 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:34.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:34.895368 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:35.395101 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:35.395178 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:35.395506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:35.895173 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:35.895249 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:35.895577 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:36.394916 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:36.394992 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:36.395327 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:36.894927 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:36.895007 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:36.895323 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:37.394893 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:37.394967 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:37.395252 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:37.395302 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:37.894928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:37.895009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:37.895332 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:38.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:38.395038 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:38.395371 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:38.895059 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:38.895134 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:38.895394 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:39.394962 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:39.395049 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:39.395388 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:39.395443 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:39.895187 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:39.895277 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:39.895635 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:40.395270 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:40.395343 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:40.395589 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:40.894940 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:40.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:40.895352 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:41.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:41.395047 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:41.395386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:41.895073 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:41.895149 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:41.895412 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:41.895454 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:42.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:42.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:42.395389 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:42.895106 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:42.895183 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:42.895531 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:43.394891 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:43.394962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:43.395291 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:43.894960 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:43.895052 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:43.895424 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:43.895479 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:44.394928 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:44.395009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:44.395368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:44.895047 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:44.895117 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:44.895407 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:45.395328 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:45.395422 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:45.395783 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:45.895608 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:45.895699 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:45.896131 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:45.896187 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:46.394880 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:46.394956 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:46.395280 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:46.894977 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:46.895051 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:46.895386 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:47.395116 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:47.395191 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:47.395557 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:47.894966 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:47.895047 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:47.895364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:48.394961 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:48.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:48.395365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:48.395424 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:48.895132 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:48.895210 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:48.895541 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:49.394910 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:49.395006 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:49.395291 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:49.894949 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:49.895027 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:49.895327 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:50.395224 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:50.395303 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:50.395651 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:50.395707 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:50.895406 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:50.895483 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:50.895757 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:51.395554 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:51.395639 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:51.395931 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:51.895695 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:51.895768 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:51.896100 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:52.395729 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:52.395811 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:52.396079 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:52.396127 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:52.895894 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:52.895969 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:52.896306 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:53.395050 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:53.395150 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:53.395532 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:53.894992 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:53.895062 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:53.895316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:54.394937 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:54.395011 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:54.395316 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:54.894946 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:54.895025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:54.895320 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:54.895366 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:55.395222 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:55.395291 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:55.395575 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:55.894969 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:55.895061 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:55.895409 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:56.394936 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:56.395020 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:56.395338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:56.895032 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:56.895105 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:56.895403 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:56.895458 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:57.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:57.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:57.395357 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:57.895074 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:57.895154 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:57.895479 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:58.394862 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:58.394940 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:58.395279 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:58.894867 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:58.894948 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:58.895307 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:31:59.394852 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:59.394934 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:59.395284 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:31:59.395339 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:31:59.895771 1305484 type.go:168] "Request Body" body=""
	I1218 00:31:59.895849 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:31:59.896110 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:00.395197 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:00.395298 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:00.395737 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:00.895502 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:00.895586 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:00.895905 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:01.395709 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:01.395787 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:01.396061 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:01.396105 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:32:01.895861 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:01.895937 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:01.896281 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:32:02.394879 1305484 type.go:168] "Request Body" body=""
	I1218 00:32:02.394956 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:32:02.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:32:03.895469 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 poll above repeats unchanged every ~500 ms from 00:32:02 through 00:33:04; every attempt logs an empty response (status="" headers="" milliseconds=0), and roughly every 2 s node_ready.go:55 logs the same "connection refused" warning ...]
	I1218 00:33:04.894914 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:04.894989 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:04.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:04.895395 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:05.395163 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:05.395243 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:05.395682 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:05.895450 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:05.895524 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:05.895784 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:06.395568 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:06.395656 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:06.395978 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:06.895794 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:06.895874 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:06.896211 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:06.896271 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:07.394879 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:07.394959 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:07.395285 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:07.894962 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:07.895042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:07.895397 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:08.394973 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:08.395055 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:08.395407 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:08.895094 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:08.895172 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:08.895469 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:09.394967 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:09.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:09.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:09.395444 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:09.895137 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:09.895212 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:09.895526 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:10.395178 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:10.395259 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:10.395579 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:10.895391 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:10.895474 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:10.895867 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:11.395660 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:11.395744 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:11.396081 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:11.396140 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:11.895822 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:11.895896 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:11.896157 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:12.394896 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:12.394973 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:12.395293 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:12.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:12.895028 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:12.895368 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:13.395034 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:13.395107 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:13.395365 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:13.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:13.895032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:13.895385 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:13.895454 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:14.395141 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:14.395215 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:14.395535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:14.895214 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:14.895295 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:14.895592 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:15.395316 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:15.395398 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:15.395758 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:15.895576 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:15.895652 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:15.895992 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:15.896047 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:16.395754 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:16.395828 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:16.396096 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:16.895867 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:16.895943 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:16.896286 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:17.394997 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:17.395084 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:17.395428 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:17.894891 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:17.894962 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:17.895235 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:18.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:18.395037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:18.395355 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:18.395414 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:18.894958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:18.895040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:18.895375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:19.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:19.394980 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:19.395272 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:19.895004 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:19.895087 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:19.895438 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:20.395201 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:20.395308 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:20.395646 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:20.395698 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:20.895422 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:20.895490 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:20.895757 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:21.395521 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:21.395598 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:21.395947 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:21.895610 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:21.895689 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:21.896027 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:22.395778 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:22.395849 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:22.396108 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:22.396151 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:22.894879 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:22.894954 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:22.895254 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:23.394957 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:23.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:23.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:23.894944 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:23.895018 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:23.895354 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:24.395023 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:24.395106 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:24.395432 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:24.894955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:24.895035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:24.895375 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:24.895433 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:25.395157 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:25.395226 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:25.395475 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:25.895136 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:25.895218 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:25.895539 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:26.395250 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:26.395334 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:26.395706 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:26.895464 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:26.895534 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:26.895793 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:26.895834 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:27.395582 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:27.395665 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:27.396005 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:27.895686 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:27.895765 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:27.896121 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:28.395755 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:28.395828 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:28.396080 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:28.895856 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:28.895931 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:28.896264 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:28.896319 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:29.394871 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:29.394967 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:29.395342 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:29.895043 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:29.895118 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:29.895400 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:30.395313 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:30.395390 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:30.395741 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:30.895528 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:30.895610 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:30.895946 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:31.395576 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:31.395644 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:31.395889 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:31.395930 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:31.895675 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:31.895753 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:31.896082 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:32.394834 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:32.394919 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:32.395261 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:32.894964 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:32.895046 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:32.895346 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:33.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:33.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:33.395396 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:33.895091 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:33.895177 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:33.895502 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:33.895563 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:34.394882 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:34.394955 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:34.395261 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:34.894950 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:34.895025 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:34.895369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:35.395078 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:35.395153 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:35.395506 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:35.894873 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:35.894948 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:35.895257 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:36.394950 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:36.395033 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:36.395348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:36.395402 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:36.895071 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:36.895153 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:36.895476 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:37.394881 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:37.394952 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:37.395268 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:37.894937 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:37.895015 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:37.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:38.394918 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:38.395002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:38.395360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:38.894888 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:38.895002 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:38.895305 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:38.895353 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:39.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:39.395014 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:39.395364 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:39.895212 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:39.895299 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:39.895609 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:40.395293 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:40.395361 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:40.395613 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:40.894947 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:40.895022 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:40.895328 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:40.895383 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:41.395069 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:41.395147 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:41.395453 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:41.894936 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:41.895013 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:41.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:42.394951 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:42.395035 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:42.395399 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:42.895138 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:42.895215 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:42.895542 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:42.895601 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:43.394941 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:43.395017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:43.395278 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:43.895604 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:43.895677 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:43.895977 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:44.395290 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:44.395367 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:44.395718 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:44.895507 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:44.895582 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:44.895842 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:44.895892 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:45.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:45.395038 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:45.395360 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:45.894958 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:45.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:45.895369 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:46.395070 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:46.395160 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:46.395494 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:46.894943 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:46.895019 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:46.895311 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:47.394992 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:47.395069 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:47.395419 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:47.395483 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:47.894889 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:47.894965 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:47.895236 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:48.394934 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:48.395017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:48.395366 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:48.895064 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:48.895145 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:48.895481 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:49.395814 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:49.395888 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:49.396152 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:49.396201 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:49.894944 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:49.895021 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:49.895348 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:50.395242 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:50.395323 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:50.395662 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:50.894864 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:50.894942 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:50.895212 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:51.394982 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:51.395060 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:51.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:51.895127 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:51.895213 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:51.895688 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:51.895762 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:52.395524 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:52.395609 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:52.395929 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:52.895771 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:52.895845 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:52.896160 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:53.394898 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:53.395003 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:53.395295 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:53.894861 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:53.894932 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:53.895273 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:33:54.394811 1305484 type.go:168] "Request Body" body=""
	I1218 00:33:54.394887 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:33:54.395224 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:33:54.395284 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:33:54.895871..00:34:55.395475 1305484 round_trippers.go:527,632] "Request"/"Response": GET https://192.168.49.2:8441/api/v1/nodes/functional-232602 retried every ~500ms with identical headers (Accept: application/vnd.kubernetes.protobuf,application/json; User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format); every attempt returned an empty response (status="" headers="" milliseconds=0)
	W1218 00:33:56.395441..00:34:54.395365 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused (logged roughly every 2s throughout the interval above)
	I1218 00:34:55.894945 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:55.895037 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:55.895381 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:56.394954 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:56.395029 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:56.395367 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:56.395422 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:56.895064 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:56.895133 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:56.895393 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:57.394932 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:57.395009 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:57.395363 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:57.895056 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:57.895135 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:57.895491 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:58.395180 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:58.395253 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:58.395564 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:34:58.395616 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:34:58.894935 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:58.895017 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:58.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:59.394958 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:59.395042 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:59.395387 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:34:59.894882 1305484 type.go:168] "Request Body" body=""
	I1218 00:34:59.894955 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:34:59.895253 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:00.395263 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:00.395351 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:00.395651 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:00.395696 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:00.895585 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:00.895660 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:00.895999 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:01.395773 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:01.395844 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:01.396106 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:01.895887 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:01.895974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:01.896290 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:02.394993 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:02.395076 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:02.395438 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:02.895141 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:02.895226 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:02.895545 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:02.895597 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:03.394956 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:03.395032 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:03.395370 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:03.895085 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:03.895169 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:03.895513 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:04.395827 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:04.395892 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:04.396191 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:04.894906 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:04.894983 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:04.895338 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:05.395161 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:05.395239 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:05.395535 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:05.395581 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:05.894901 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:05.894974 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:05.895226 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:06.394955 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:06.395040 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:06.395376 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:06.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:06.895030 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:06.895377 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:07.395052 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:07.395122 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:07.395495 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:07.894948 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:07.895023 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:07.895351 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:07.895403 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:08.395103 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:08.395179 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:08.395500 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:08.895048 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:08.895123 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:08.895471 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:09.395187 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:09.395267 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:09.395657 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1218 00:35:09.895568 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:09.895676 1305484 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-232602" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1218 00:35:09.896021 1305484 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1218 00:35:09.896082 1305484 node_ready.go:55] error getting node "functional-232602" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-232602": dial tcp 192.168.49.2:8441: connect: connection refused
	I1218 00:35:10.395155 1305484 type.go:168] "Request Body" body=""
	I1218 00:35:10.395216 1305484 node_ready.go:38] duration metric: took 6m0.000503053s for node "functional-232602" to be "Ready" ...
	I1218 00:35:10.402744 1305484 out.go:203] 
	W1218 00:35:10.405748 1305484 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 00:35:10.405971 1305484 out.go:285] * 
	W1218 00:35:10.408384 1305484 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:35:10.411337 1305484 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:35:17 functional-232602 containerd[5205]: time="2025-12-18T00:35:17.705232515Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:18 functional-232602 containerd[5205]: time="2025-12-18T00:35:18.764609613Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 18 00:35:18 functional-232602 containerd[5205]: time="2025-12-18T00:35:18.767396570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 18 00:35:18 functional-232602 containerd[5205]: time="2025-12-18T00:35:18.774291751Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:18 functional-232602 containerd[5205]: time="2025-12-18T00:35:18.774651350Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:19 functional-232602 containerd[5205]: time="2025-12-18T00:35:19.748841940Z" level=info msg="No images store for sha256:d3f166a94538771772f2aeda8faeb235ac972e7b336df4992d5412071ea6ea51"
	Dec 18 00:35:19 functional-232602 containerd[5205]: time="2025-12-18T00:35:19.751193535Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-232602\""
	Dec 18 00:35:19 functional-232602 containerd[5205]: time="2025-12-18T00:35:19.758320825Z" level=info msg="ImageCreate event name:\"sha256:6d75aca4bf4907371f012e5f07ac5372de8ce8437f37e9634c438c36e1e883ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:19 functional-232602 containerd[5205]: time="2025-12-18T00:35:19.758874716Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:20 functional-232602 containerd[5205]: time="2025-12-18T00:35:20.591512381Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 18 00:35:20 functional-232602 containerd[5205]: time="2025-12-18T00:35:20.594015209Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 18 00:35:20 functional-232602 containerd[5205]: time="2025-12-18T00:35:20.596048115Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 18 00:35:20 functional-232602 containerd[5205]: time="2025-12-18T00:35:20.608020639Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.562936969Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.565290492Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.568554895Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.575271348Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.716013099Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.718216414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.727189899Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.727523709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.890060584Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.892826454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.899882107Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:35:21 functional-232602 containerd[5205]: time="2025-12-18T00:35:21.900672596Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:35:25.942677    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:25.943105    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:25.944736    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:25.945223    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:35:25.946940    9317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:35:25 up  7:17,  0 user,  load average: 0.62, 0.34, 0.67
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:35:22 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 18 00:35:23 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:23 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:23 functional-232602 kubelet[9096]: E1218 00:35:23.213947    9096 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 18 00:35:23 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:23 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:23 functional-232602 kubelet[9190]: E1218 00:35:23.956375    9190 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:23 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:24 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 18 00:35:24 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:24 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:24 functional-232602 kubelet[9211]: E1218 00:35:24.702691    9211 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:24 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:24 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:35:25 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 18 00:35:25 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:25 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:35:25 functional-232602 kubelet[9232]: E1218 00:35:25.459724    9232 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:35:25 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:35:25 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
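The long retry trace above is minikube's node-Ready wait loop: a GET against /api/v1/nodes/functional-232602 roughly every 500ms, a warning on each refused connection, and a hard deadline (node_ready.go reports it gave up after 6m0.000503053s). Below is a minimal sketch of that poll-until-deadline pattern, assuming k8s.io/apimachinery's wait helpers; checkNodeReady is a hypothetical stand-in for the real nodes GET, not minikube's actual code.

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	// checkNodeReady stands in for the GET /api/v1/nodes/<name> seen in the
	// trace. A real implementation would fetch the Node and inspect its
	// "Ready" condition, returning (false, nil) on connection refused so
	// the poll keeps retrying instead of aborting.
	func checkNodeReady(ctx context.Context) (bool, error) {
		return false, nil // the apiserver is down for the whole window in this run
	}

	func main() {
		// Poll every 500ms for up to 6 minutes, matching the cadence and
		// deadline visible in the trace above.
		err := wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 6*time.Minute, true, checkNodeReady)
		if err != nil {
			fmt.Println("node never became Ready:", err) // context deadline exceeded
		}
	}

When the condition never succeeds, the wait returns the context error, which minikube surfaces as the GUEST_START "WaitNodeCondition: context deadline exceeded" exit shown above.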
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (413.509618ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (2.40s)
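The status probe above, out/minikube-linux-arm64 status --format={{.APIServer}}, takes a Go text/template and evaluates it against minikube's status struct, which is why stdout is the bare word "Stopped". A minimal illustration of that mechanism follows; this Status struct is a stand-in chosen for the example, not minikube's actual type.

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in with just the fields the template
	// might reference.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		// --format is parsed as a text/template, so {{.APIServer}} renders
		// that single field and nothing else.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
	}

The non-zero exit alongside it reflects the stopped components, which helpers_test.go explicitly tolerates ("status error: exit status 2 (may be ok)") before skipping the kubectl checks.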

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (736.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1218 00:38:25.214762 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:40:04.395310 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:41:27.464762 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:43:25.214490 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:45:04.400812 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.084813634s)

-- stdout --
	* [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	* Pulling base image v0.0.48-1765966054-22186 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000096267s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
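Both failed kubeadm attempts above bottom out in the same root cause visible in the kubelet journal: kubelet v1.35 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the healthz probe at 127.0.0.1:10248 never answers and wait-control-plane times out after 4m0s. Per the preflight warning, opting back in requires setting the KubeletConfiguration option FailCgroupV1 to false and explicitly skipping the validation. As an illustrative sketch only of what that configuration looks like, assuming sigs.k8s.io/yaml (this is not what minikube itself does):

	package main

	import (
		"fmt"

		"sigs.k8s.io/yaml"
	)

	func main() {
		// Shape of a KubeletConfiguration that opts back into cgroup v1.
		// The field name failCgroupV1 is taken from the kubeadm warning
		// above and the KEP it links; treat it as an assumption here.
		cfg := map[string]interface{}{
			"apiVersion":   "kubelet.config.k8s.io/v1beta1",
			"kind":         "KubeletConfiguration",
			"failCgroupV1": false,
		}
		out, err := yaml.Marshal(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}

The suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd, issue #4172) targets an older cgroup-driver mismatch; whether it resolves this cgroup v1 rejection on a v1.35 kubelet is not shown by this log.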
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-232602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.086070906s for "functional-232602" cluster.
I1218 00:47:41.081587 1261148 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
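
The inspect dump above shows every exposed container port bound to 127.0.0.1 with an ephemeral host port (22/tcp -> 33902, 8441/tcp -> 33905, and so on). As a minimal sketch, the same mapping can be read back with the Go-template form that the cli_runner invocations in the logs below use; the profile name functional-232602 and the port values are taken from this run:

    # print the host port mapped to the container's SSH port; prints 33902 for this run
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-232602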
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (335.929585ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
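
Note the combination above: the host field prints Running while the status command itself exits 2, which is why the harness flags it as "may be ok". A hedged sketch of querying the remaining status fields with the same --format flag (Host/Kubelet/APIServer/Kubeconfig are the usual minikube status fields; reading a non-zero exit with Host=Running as a component-level failure is an inference from this output and the harness note, not documented behavior):

    # query each component instead of only the host
    out/minikube-linux-arm64 status -p functional-232602 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'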
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-739047 image ls --format yaml --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh     │ functional-739047 ssh pgrep buildkitd                                                                                                                 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ image   │ functional-739047 image ls --format json --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls --format table --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr                                                │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls                                                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ delete  │ -p functional-739047                                                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ start   │ -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ start   │ -p functional-232602 --alsologtostderr -v=8                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:29 UTC │                     │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:latest                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add minikube-local-cache-test:functional-232602                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache delete minikube-local-cache-test:functional-232602                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl images                                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-232602 cache reload                                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ kubectl │ functional-232602 kubectl -- --context functional-232602 get pods                                                                                     │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ start   │ -p functional-232602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:35:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:35:27.044902 1311248 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:35:27.045002 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045006 1311248 out.go:374] Setting ErrFile to fd 2...
	I1218 00:35:27.045010 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045249 1311248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:35:27.045606 1311248 out.go:368] Setting JSON to false
	I1218 00:35:27.046406 1311248 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26273,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:35:27.046458 1311248 start.go:143] virtualization:  
	I1218 00:35:27.049930 1311248 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:35:27.052925 1311248 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:35:27.053012 1311248 notify.go:221] Checking for updates...
	I1218 00:35:27.058856 1311248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:35:27.061872 1311248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:35:27.064792 1311248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:35:27.067743 1311248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:35:27.070676 1311248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:35:27.074096 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:27.074190 1311248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:35:27.106641 1311248 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:35:27.106748 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.164302 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.154715728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.164392 1311248 docker.go:319] overlay module found
	I1218 00:35:27.167427 1311248 out.go:179] * Using the docker driver based on existing profile
	I1218 00:35:27.170281 1311248 start.go:309] selected driver: docker
	I1218 00:35:27.170292 1311248 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.170444 1311248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:35:27.170546 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.230048 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.221277832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.230469 1311248 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 00:35:27.230491 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:27.230542 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:27.230580 1311248 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.235511 1311248 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:35:27.238271 1311248 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:35:27.241192 1311248 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:35:27.243943 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:27.243991 1311248 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:35:27.243999 1311248 cache.go:65] Caching tarball of preloaded images
	I1218 00:35:27.244040 1311248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:35:27.244087 1311248 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:35:27.244096 1311248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:35:27.244211 1311248 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:35:27.263574 1311248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:35:27.263584 1311248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:35:27.263598 1311248 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:35:27.263628 1311248 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:35:27.263679 1311248 start.go:364] duration metric: took 35.445µs to acquireMachinesLock for "functional-232602"
	I1218 00:35:27.263697 1311248 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:35:27.263701 1311248 fix.go:54] fixHost starting: 
	I1218 00:35:27.263946 1311248 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:35:27.280222 1311248 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:35:27.280243 1311248 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:35:27.283327 1311248 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:35:27.283352 1311248 machine.go:94] provisionDockerMachine start ...
	I1218 00:35:27.283428 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.299920 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.300231 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.300238 1311248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:35:27.452356 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.452370 1311248 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:35:27.452432 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.473471 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.473816 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.473825 1311248 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:35:27.640067 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.640142 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.667013 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.667323 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.667342 1311248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:35:27.820945 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:35:27.820961 1311248 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:35:27.820980 1311248 ubuntu.go:190] setting up certificates
	I1218 00:35:27.820989 1311248 provision.go:84] configureAuth start
	I1218 00:35:27.821051 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:27.838852 1311248 provision.go:143] copyHostCerts
	I1218 00:35:27.838916 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:35:27.838924 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:35:27.838994 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:35:27.839097 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:35:27.839100 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:35:27.839128 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:35:27.839186 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:35:27.839190 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:35:27.839213 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:35:27.839265 1311248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:35:28.109890 1311248 provision.go:177] copyRemoteCerts
	I1218 00:35:28.109947 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:35:28.109996 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.127232 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.232344 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:35:28.250086 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:35:28.268448 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:35:28.286339 1311248 provision.go:87] duration metric: took 465.326862ms to configureAuth
	I1218 00:35:28.286357 1311248 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:35:28.286550 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:28.286556 1311248 machine.go:97] duration metric: took 1.003199883s to provisionDockerMachine
	I1218 00:35:28.286562 1311248 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:35:28.286572 1311248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:35:28.286620 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:35:28.286663 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.304025 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.412869 1311248 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:35:28.416834 1311248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:35:28.416854 1311248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:35:28.416865 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:35:28.416921 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:35:28.417025 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:35:28.417099 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:35:28.417168 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:35:28.424798 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:28.442733 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:35:28.462911 1311248 start.go:296] duration metric: took 176.334186ms for postStartSetup
	I1218 00:35:28.462983 1311248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:35:28.463039 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.480489 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.585769 1311248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:35:28.590837 1311248 fix.go:56] duration metric: took 1.327128154s for fixHost
	I1218 00:35:28.590854 1311248 start.go:83] releasing machines lock for "functional-232602", held for 1.327167711s
	I1218 00:35:28.590944 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:28.607738 1311248 ssh_runner.go:195] Run: cat /version.json
	I1218 00:35:28.607789 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.608049 1311248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:35:28.608095 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.626689 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.634380 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.732432 1311248 ssh_runner.go:195] Run: systemctl --version
	I1218 00:35:28.823477 1311248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 00:35:28.828399 1311248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:35:28.828467 1311248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:35:28.836277 1311248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:35:28.836291 1311248 start.go:496] detecting cgroup driver to use...
	I1218 00:35:28.836322 1311248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:35:28.836377 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:35:28.852038 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:35:28.865568 1311248 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:35:28.865634 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:35:28.881324 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:35:28.894482 1311248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:35:29.019814 1311248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:35:29.139455 1311248 docker.go:234] disabling docker service ...
	I1218 00:35:29.139511 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:35:29.157302 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:35:29.172520 1311248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:35:29.290798 1311248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:35:29.409846 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:35:29.423039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:35:29.438313 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:35:29.447458 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:35:29.457161 1311248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:35:29.457221 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:35:29.466703 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.475761 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:35:29.484925 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.493811 1311248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:35:29.502125 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:35:29.511205 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:35:29.520548 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:35:29.530343 1311248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:35:29.538157 1311248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:35:29.545765 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:29.664409 1311248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 00:35:29.789454 1311248 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:35:29.789537 1311248 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:35:29.793414 1311248 start.go:564] Will wait 60s for crictl version
	I1218 00:35:29.793467 1311248 ssh_runner.go:195] Run: which crictl
	I1218 00:35:29.796922 1311248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:35:29.821478 1311248 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:35:29.821534 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.845973 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.874969 1311248 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:35:29.877886 1311248 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:35:29.897397 1311248 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:35:29.909164 1311248 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1218 00:35:29.912023 1311248 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:35:29.912156 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:29.912246 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.959601 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.959615 1311248 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:35:29.959670 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.987018 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.987029 1311248 cache_images.go:86] Images are preloaded, skipping loading
	I1218 00:35:29.987035 1311248 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:35:29.987151 1311248 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 00:35:29.987219 1311248 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:35:30.033188 1311248 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1218 00:35:30.033262 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:30.033272 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:30.033285 1311248 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:35:30.033322 1311248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:35:30.033459 1311248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 00:35:30.033555 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:35:30.044133 1311248 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:35:30.044224 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:35:30.053566 1311248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:35:30.069600 1311248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:35:30.086185 1311248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1218 00:35:30.100953 1311248 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:35:30.105204 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:30.229133 1311248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:35:30.643842 1311248 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:35:30.643853 1311248 certs.go:195] generating shared ca certs ...
	I1218 00:35:30.643868 1311248 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:35:30.644040 1311248 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:35:30.644079 1311248 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:35:30.644085 1311248 certs.go:257] generating profile certs ...
	I1218 00:35:30.644187 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:35:30.644248 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:35:30.644287 1311248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:35:30.644391 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:35:30.644420 1311248 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:35:30.644426 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:35:30.644455 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:35:30.644481 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:35:30.644512 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:35:30.644557 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:30.645271 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:35:30.667963 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:35:30.688789 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:35:30.707638 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:35:30.727172 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:35:30.745582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:35:30.763537 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:35:30.781521 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:35:30.799255 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:35:30.816582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:35:30.835230 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:35:30.852513 1311248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:35:30.865555 1311248 ssh_runner.go:195] Run: openssl version
	I1218 00:35:30.871911 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.879397 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:35:30.886681 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890109 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890169 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.930894 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:35:30.938142 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.945286 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:35:30.952538 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956151 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956245 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.997157 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:35:31.005056 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.014006 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:35:31.022034 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025894 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025961 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.067200 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
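The `.0` names being tested here (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: `openssl x509 -hash -noout` prints the hash that c_rehash-style lookups expect as a symlink name under /etc/ssl/certs. A rough Go sketch of how such a link pair comes about (illustrative; the log above only verifies that the links already exist):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links a PEM into /etc/ssl/certs under its own name and under
    // its OpenSSL subject hash, so hash-based CA lookups can find it.
    func installCA(pemPath string) error {
    	name := filepath.Base(pemPath)
    	if err := os.Symlink(pemPath, filepath.Join("/etc/ssl/certs", name)); err != nil && !os.IsExist(err) {
    		return err
    	}
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
    		return err
    	}
    	fmt.Println("installed", link)
    	return nil
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }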
	I1218 00:35:31.075278 1311248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:35:31.079306 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:35:31.123391 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:35:31.165879 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:35:31.208281 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:35:31.249146 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:35:31.290212 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
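Each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 24 hours. The same check can be done in pure Go with the standard library; a small sketch (the file path is one of the certs checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin is a pure-Go equivalent of
    // `openssl x509 -noout -in <path> -checkend 86400`: it reports whether
    // the certificate's NotAfter falls inside the next d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }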
	I1218 00:35:31.331444 1311248 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:31.331522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:35:31.331580 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.356945 1311248 cri.go:89] found id: ""
	I1218 00:35:31.357003 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:35:31.364788 1311248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:35:31.364798 1311248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:35:31.364876 1311248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:35:31.372428 1311248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.372951 1311248 kubeconfig.go:125] found "functional-232602" server: "https://192.168.49.2:8441"
	I1218 00:35:31.374199 1311248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:35:31.382218 1311248 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-18 00:20:57.479200490 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-18 00:35:30.095938034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
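Drift detection here is just `diff -u` against the previously deployed config: exit status 0 means no drift, 1 means the files differ (in this run, the enable-admission-plugins value changed to NamespaceAutoProvision), and anything higher is a real error. A minimal sketch of that decision (illustrative, not minikube's implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // configDrift wraps `diff -u old new`. diff exits 0 when the files
    // match, 1 when they differ, and >1 on trouble; exit 1 is exactly the
    // signal used above to decide whether to reconfigure the cluster.
    func configDrift(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
    	if err == nil {
    		return false, "", nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	drift, patch, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if drift {
    		fmt.Print(patch)
    	}
    }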
	I1218 00:35:31.382230 1311248 kubeadm.go:1161] stopping kube-system containers ...
	I1218 00:35:31.382240 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1218 00:35:31.382293 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.418635 1311248 cri.go:89] found id: ""
	I1218 00:35:31.418695 1311248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1218 00:35:31.437319 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:35:31.447695 1311248 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 18 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 18 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 18 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 18 00:25 /etc/kubernetes/scheduler.conf
	
	I1218 00:35:31.447757 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:35:31.455511 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:35:31.463139 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.463194 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:35:31.470550 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.478132 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.478200 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.485959 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:35:31.493702 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.493757 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
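Each component kubeconfig is grepped for the expected endpoint https://control-plane.minikube.internal:8441; when grep exits 1 the file is treated as stale and removed so the kubeconfig phase below can regenerate it. A plain-Go sketch of the same keep-or-remove decision (illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // dropIfStale removes conf when it does not mention endpoint, the same
    // grep-then-rm pattern the log applies to kubelet.conf,
    // controller-manager.conf and scheduler.conf.
    func dropIfStale(conf, endpoint string) error {
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // endpoint matches; keep the file
    	}
    	return os.Remove(conf)
    }

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8441"
    	for _, conf := range []string{
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := dropIfStale(conf, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, conf, err)
    		}
    	}
    }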
	I1218 00:35:31.501195 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:35:31.509596 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:31.563212 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:32.882945 1311248 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.319707666s)
	I1218 00:35:32.883005 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.109967 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.178681 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
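Rather than a full `kubeadm init`, the restart path replays individual init phases against the refreshed config, in the order logged above: certs, kubeconfigs, kubelet start, control-plane manifests, local etcd. A compressed sketch of that sequence (illustrative; error handling simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const config = "/var/tmp/minikube/kubeadm.yaml"
    	// The same five phases the log runs, in order: regenerate certs and
    	// kubeconfigs, restart the kubelet, then write the control-plane
    	// and etcd static-pod manifests.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", config)
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }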
	I1218 00:35:33.229970 1311248 api_server.go:52] waiting for apiserver process to appear ...
	I1218 00:35:33.230040 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:33.730927 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.230378 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.730284 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.230343 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.730919 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.730993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.230539 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.731124 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.230838 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.730863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.230678 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.730230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.230236 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.731068 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:41.231109 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:41.730288 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:42.230203 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:42.730234 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:43.230141 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:43.730185 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:44.231143 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:44.730804 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:45.237230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:45.730893 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:46.230803 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:46.730882 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:47.230533 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:47.731147 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:48.230905 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:48.730814 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:49.230754 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:49.730337 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:50.230375 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:50.731190 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:51.230987 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:51.731023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:52.230495 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:52.730322 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:53.230929 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:53.730922 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:54.231058 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:54.730458 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:55.230148 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:55.730779 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:56.230494 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:56.731136 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:57.231080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:57.730219 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:58.230880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:58.730261 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:59.230265 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:59.730444 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:00.230228 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:00.730965 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:01.231030 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:01.730793 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:02.231094 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:02.730432 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:03.230277 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:03.730969 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:04.230206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:04.731080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:05.230777 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:05.730718 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:06.231042 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:06.730199 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:07.230478 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:07.730807 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:08.230613 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:08.730187 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:09.231163 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:09.731095 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:10.231010 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:10.731081 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:11.230167 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:11.730331 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:12.230144 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:12.730362 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:13.230993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:13.730893 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:14.230791 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:14.731035 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:15.230946 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:15.730274 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:16.230238 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:16.730202 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:17.231089 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:17.730821 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:18.230480 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:18.730348 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:19.230188 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:19.730212 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:20.230315 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:20.730113 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:21.231120 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:21.730951 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:22.230491 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:22.730452 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:23.230231 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:23.730205 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:24.230525 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:24.730779 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:25.230233 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:25.731067 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:26.231079 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:26.730956 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:27.230990 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:27.730196 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:28.230863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:28.730884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:29.230380 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:29.730826 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:30.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:30.731192 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:31.230615 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:31.730900 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:32.230553 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:32.730134 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
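The run of pgrep lines above is minikube polling for a kube-apiserver process at roughly 500ms intervals; after about a minute with no match (00:35:33 to 00:36:32) it gives up and falls through to the diagnostics below. A minimal local sketch of the same wait loop:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` about
    // every 500ms until it succeeds or the deadline passes, matching the
    // cadence of the log lines above.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // process found
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServer(time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }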
	I1218 00:36:33.230238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:33.230314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:33.258458 1311248 cri.go:89] found id: ""
	I1218 00:36:33.258472 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.258484 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:33.258490 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:33.258562 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:33.283965 1311248 cri.go:89] found id: ""
	I1218 00:36:33.283979 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.283986 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:33.283991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:33.284048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:33.308663 1311248 cri.go:89] found id: ""
	I1218 00:36:33.308678 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.308693 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:33.308699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:33.308760 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:33.337762 1311248 cri.go:89] found id: ""
	I1218 00:36:33.337775 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.337783 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:33.337788 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:33.337852 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:33.366489 1311248 cri.go:89] found id: ""
	I1218 00:36:33.366503 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.366510 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:33.366515 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:33.366574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:33.401983 1311248 cri.go:89] found id: ""
	I1218 00:36:33.401998 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.402005 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:33.402010 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:33.402067 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:33.436853 1311248 cri.go:89] found id: ""
	I1218 00:36:33.436867 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.436874 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:33.436883 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:33.436893 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:33.504087 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:33.504097 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:33.504107 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:33.570523 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:33.570549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:33.607484 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:33.607500 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:33.664867 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:33.664884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
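Once the wait times out, the diagnostic pass above snapshots the node: a per-component `crictl ps -a --quiet --name=...` lookup (every one of which comes back `found id: ""`, i.e. no container in any state), plus the kubelet and containerd journals and dmesg. A sketch of the container lookup (illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`: with
    // --quiet, crictl prints one container ID per line, so empty output
    // means no matching container exists in any state.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(name)
    		if err != nil {
    			fmt.Fprintln(os.Stderr, name, err)
    			continue
    		}
    		fmt.Printf("%s: %d container(s)\n", name, len(ids))
    	}
    }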
	I1218 00:36:36.181388 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:36.191464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:36.191521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:36.214848 1311248 cri.go:89] found id: ""
	I1218 00:36:36.214863 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.214870 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:36.214876 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:36.214933 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:36.241311 1311248 cri.go:89] found id: ""
	I1218 00:36:36.241324 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.241331 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:36.241336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:36.241394 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:36.265257 1311248 cri.go:89] found id: ""
	I1218 00:36:36.265271 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.265279 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:36.265284 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:36.265343 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:36.288492 1311248 cri.go:89] found id: ""
	I1218 00:36:36.288506 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.288513 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:36.288518 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:36.288574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:36.316558 1311248 cri.go:89] found id: ""
	I1218 00:36:36.316573 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.316580 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:36.316585 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:36.316664 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:36.341952 1311248 cri.go:89] found id: ""
	I1218 00:36:36.341966 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.341973 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:36.341979 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:36.342037 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:36.365945 1311248 cri.go:89] found id: ""
	I1218 00:36:36.365959 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.365966 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:36.365974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:36.365983 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:36.426123 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:36.426142 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:36.444123 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:36.444140 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:36.509193 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:36.509204 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:36.509214 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:36.571649 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:36.571667 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.103696 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:39.113703 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:39.113762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:39.141856 1311248 cri.go:89] found id: ""
	I1218 00:36:39.141870 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.141878 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:39.141883 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:39.141944 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:39.170038 1311248 cri.go:89] found id: ""
	I1218 00:36:39.170052 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.170101 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:39.170107 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:39.170172 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:39.199014 1311248 cri.go:89] found id: ""
	I1218 00:36:39.199028 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.199035 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:39.199041 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:39.199101 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:39.226392 1311248 cri.go:89] found id: ""
	I1218 00:36:39.226414 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.226422 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:39.226427 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:39.226493 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:39.251905 1311248 cri.go:89] found id: ""
	I1218 00:36:39.251920 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.251927 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:39.251932 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:39.251992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:39.276915 1311248 cri.go:89] found id: ""
	I1218 00:36:39.276937 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.276944 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:39.276949 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:39.277007 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:39.301520 1311248 cri.go:89] found id: ""
	I1218 00:36:39.301534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.301542 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:39.301551 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:39.301560 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:39.364240 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:39.364259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.394082 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:39.394098 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:39.460886 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:39.460907 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:39.477258 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:39.477273 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:39.547172 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:42.048213 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:42.059442 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:42.059521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:42.095887 1311248 cri.go:89] found id: ""
	I1218 00:36:42.095903 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.095911 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:42.095917 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:42.095987 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:42.126738 1311248 cri.go:89] found id: ""
	I1218 00:36:42.126756 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.126763 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:42.126769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:42.126846 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:42.183895 1311248 cri.go:89] found id: ""
	I1218 00:36:42.183916 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.183924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:42.183931 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:42.184005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:42.217296 1311248 cri.go:89] found id: ""
	I1218 00:36:42.217313 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.217320 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:42.217333 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:42.217410 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:42.248021 1311248 cri.go:89] found id: ""
	I1218 00:36:42.248038 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.248065 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:42.248071 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:42.248143 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:42.278624 1311248 cri.go:89] found id: ""
	I1218 00:36:42.278650 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.278658 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:42.278664 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:42.278732 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:42.306575 1311248 cri.go:89] found id: ""
	I1218 00:36:42.306589 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.306604 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:42.306613 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:42.306622 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:42.366835 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:42.366859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:42.381793 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:42.381810 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:42.478588 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:42.478598 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:42.478608 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:42.541093 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:42.541114 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:45.069751 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:45.106091 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:45.106161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:45.152078 1311248 cri.go:89] found id: ""
	I1218 00:36:45.152105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.152113 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:45.152120 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:45.152202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:45.228849 1311248 cri.go:89] found id: ""
	I1218 00:36:45.228866 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.228874 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:45.228881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:45.229017 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:45.284605 1311248 cri.go:89] found id: ""
	I1218 00:36:45.284640 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.284648 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:45.284654 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:45.284773 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:45.318439 1311248 cri.go:89] found id: ""
	I1218 00:36:45.318454 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.318461 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:45.318467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:45.318532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:45.348962 1311248 cri.go:89] found id: ""
	I1218 00:36:45.348976 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.348984 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:45.348990 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:45.349055 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:45.378098 1311248 cri.go:89] found id: ""
	I1218 00:36:45.378112 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.378119 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:45.378125 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:45.378227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:45.435291 1311248 cri.go:89] found id: ""
	I1218 00:36:45.435311 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.435318 1311248 logs.go:284] No container was found matching "kindnet"
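Each polling cycle in this log probes the same seven control-plane components with crictl and records an empty ID list for every one of them. A hypothetical loop reproducing the probe by hand (component names and flags are taken from the lines above):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  # --quiet prints container IDs only; empty output is the `found id: ""`
	  # case logged above.
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching $name"
	done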
	I1218 00:36:45.435335 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:45.435362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:45.505552 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:45.505571 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
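For reference, the dmesg invocation used by the gathering step above (util-linux flags, annotated as I read them):

	# -P (--nopager) writes straight to stdout, -H (--human) formats
	# timestamps readably, -L=never disables colour codes, and --level keeps
	# only warning severity and above; tail trims to the newest 400 lines.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400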
	I1218 00:36:45.523778 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:45.523794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:45.592584 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:45.584713   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.585204   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.586780   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.587182   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.588708   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
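Every kubectl attempt in this run fails with connection refused on [::1]:8441, which means nothing is bound to the apiserver port at all, rather than the apiserver rejecting requests. A quick hypothetical check from inside the node, reusing the on-node kubectl and kubeconfig the log itself uses:

	# Is anything listening on the profile's apiserver port (8441)?
	sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"
	# The call the log keeps retrying, run by hand:
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig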
	I1218 00:36:45.592594 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:45.592606 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:45.658999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:45.659018 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:48.186749 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:48.197169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:48.197230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:48.222369 1311248 cri.go:89] found id: ""
	I1218 00:36:48.222383 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.222390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:48.222396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:48.222459 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:48.247132 1311248 cri.go:89] found id: ""
	I1218 00:36:48.247146 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.247153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:48.247158 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:48.247217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:48.272441 1311248 cri.go:89] found id: ""
	I1218 00:36:48.272455 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.272462 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:48.272467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:48.272526 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:48.302640 1311248 cri.go:89] found id: ""
	I1218 00:36:48.302655 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.302662 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:48.302679 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:48.302737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:48.329411 1311248 cri.go:89] found id: ""
	I1218 00:36:48.329425 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.329433 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:48.329438 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:48.329497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:48.358419 1311248 cri.go:89] found id: ""
	I1218 00:36:48.358433 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.358440 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:48.358445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:48.358503 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:48.383182 1311248 cri.go:89] found id: ""
	I1218 00:36:48.383195 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.383203 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:48.383210 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:48.383220 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
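The unit-log gathering above can be reproduced directly; a sketch, assuming the systemd units used by minikube's containerd node image:

	# -u selects the unit and -n limits output to the newest 400 entries;
	# --no-pager keeps the output capture-friendly.
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u containerd -n 400 --no-pager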
	I1218 00:36:48.451796 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:48.451815 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:48.467080 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:48.467096 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:48.533083 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:48.524386   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.525232   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.526917   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.527532   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.529214   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:48.533092 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:48.533103 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:48.596920 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:48.596940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
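The container-status command above is written to degrade gracefully between container runtimes; the same idiom on its own:

	# If `which crictl` succeeds, its path is substituted; otherwise the
	# literal word crictl is substituted, that command fails, and || falls
	# back to the Docker CLI.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a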
	I1218 00:36:51.124756 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:51.135594 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:51.135659 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:51.164133 1311248 cri.go:89] found id: ""
	I1218 00:36:51.164148 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.164156 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:51.164161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:51.164226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:51.190200 1311248 cri.go:89] found id: ""
	I1218 00:36:51.190215 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.190222 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:51.190228 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:51.190291 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:51.216170 1311248 cri.go:89] found id: ""
	I1218 00:36:51.216187 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.216194 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:51.216200 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:51.216263 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:51.246031 1311248 cri.go:89] found id: ""
	I1218 00:36:51.246045 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.246052 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:51.246058 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:51.246122 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:51.278864 1311248 cri.go:89] found id: ""
	I1218 00:36:51.278878 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.278885 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:51.278890 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:51.278963 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:51.303118 1311248 cri.go:89] found id: ""
	I1218 00:36:51.303132 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.303139 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:51.303144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:51.303202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:51.328091 1311248 cri.go:89] found id: ""
	I1218 00:36:51.328105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.328112 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:51.328120 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:51.328130 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:51.385226 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:51.385249 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:51.400951 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:51.400967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:51.479293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:51.470350   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.470962   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.472611   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.473295   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.474905   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:51.479304 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:51.479315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:51.541268 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:51.541288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:54.069293 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:54.080067 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:54.080153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:54.106375 1311248 cri.go:89] found id: ""
	I1218 00:36:54.106390 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.106402 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:54.106408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:54.106467 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:54.131767 1311248 cri.go:89] found id: ""
	I1218 00:36:54.131781 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.131788 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:54.131793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:54.131850 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:54.157519 1311248 cri.go:89] found id: ""
	I1218 00:36:54.157534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.157541 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:54.157546 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:54.157606 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:54.182381 1311248 cri.go:89] found id: ""
	I1218 00:36:54.182396 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.182403 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:54.182408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:54.182478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:54.211219 1311248 cri.go:89] found id: ""
	I1218 00:36:54.211234 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.211241 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:54.211247 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:54.211323 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:54.243605 1311248 cri.go:89] found id: ""
	I1218 00:36:54.243627 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.243634 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:54.243640 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:54.243710 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:54.268614 1311248 cri.go:89] found id: ""
	I1218 00:36:54.268648 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.268655 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:54.268664 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:54.268675 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:54.332655 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:54.324645   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.325064   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326586   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326911   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.328406   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:54.332668 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:54.332679 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:54.396896 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:54.396916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:54.440350 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:54.440371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:54.503158 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:54.503178 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:57.019672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:57.030198 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:57.030268 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:57.059845 1311248 cri.go:89] found id: ""
	I1218 00:36:57.059859 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.059866 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:57.059872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:57.059939 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:57.086203 1311248 cri.go:89] found id: ""
	I1218 00:36:57.086217 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.086224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:57.086229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:57.086326 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:57.115321 1311248 cri.go:89] found id: ""
	I1218 00:36:57.115335 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.115342 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:57.115347 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:57.115416 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:57.141717 1311248 cri.go:89] found id: ""
	I1218 00:36:57.141731 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.141738 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:57.141743 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:57.141801 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:57.166376 1311248 cri.go:89] found id: ""
	I1218 00:36:57.166389 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.166396 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:57.166400 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:57.166470 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:57.194461 1311248 cri.go:89] found id: ""
	I1218 00:36:57.194475 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.194494 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:57.194500 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:57.194557 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:57.219267 1311248 cri.go:89] found id: ""
	I1218 00:36:57.219280 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.219287 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:57.219295 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:57.219305 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:57.274913 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:57.274932 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:57.290015 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:57.290032 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:57.353493 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:57.344799   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.345456   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347207   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347788   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.349492   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:57.353504 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:57.353514 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:57.424372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:57.424400 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:59.955778 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:59.965801 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:59.965861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:59.993708 1311248 cri.go:89] found id: ""
	I1218 00:36:59.993722 1311248 logs.go:282] 0 containers: []
	W1218 00:36:59.993729 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:59.993734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:59.993792 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:00.055250 1311248 cri.go:89] found id: ""
	I1218 00:37:00.055266 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.055274 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:00.055280 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:00.055388 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:00.117792 1311248 cri.go:89] found id: ""
	I1218 00:37:00.117810 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.117818 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:00.117824 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:00.117903 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:00.170362 1311248 cri.go:89] found id: ""
	I1218 00:37:00.170378 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.170394 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:00.170401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:00.170482 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:00.229984 1311248 cri.go:89] found id: ""
	I1218 00:37:00.230002 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.230010 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:00.230015 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:00.230094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:00.264809 1311248 cri.go:89] found id: ""
	I1218 00:37:00.264826 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.264833 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:00.264839 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:00.264908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:00.313700 1311248 cri.go:89] found id: ""
	I1218 00:37:00.313718 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.313725 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:00.313734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:00.313747 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:00.390802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:00.390825 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:00.428189 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:00.428207 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:00.494729 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:00.494750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:00.511226 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:00.511245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:00.579855 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:00.571615   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.572526   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574023   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574461   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.575927   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:03.080114 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:03.090701 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:03.090768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:03.123581 1311248 cri.go:89] found id: ""
	I1218 00:37:03.123596 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.123603 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:03.123608 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:03.123666 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:03.148602 1311248 cri.go:89] found id: ""
	I1218 00:37:03.148615 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.148657 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:03.148662 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:03.148733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:03.174826 1311248 cri.go:89] found id: ""
	I1218 00:37:03.174840 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.174848 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:03.174853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:03.174927 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:03.200912 1311248 cri.go:89] found id: ""
	I1218 00:37:03.200926 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.200933 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:03.200939 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:03.200998 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:03.226151 1311248 cri.go:89] found id: ""
	I1218 00:37:03.226166 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.226173 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:03.226179 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:03.226237 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:03.253785 1311248 cri.go:89] found id: ""
	I1218 00:37:03.253799 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.253806 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:03.253812 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:03.253878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:03.279482 1311248 cri.go:89] found id: ""
	I1218 00:37:03.279495 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.279502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:03.279510 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:03.279521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:03.294545 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:03.294563 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:03.360050 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:03.351618   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.352118   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.353740   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.354377   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.356009   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:03.360059 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:03.360071 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:03.423132 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:03.423151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:03.461805 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:03.461820 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.018802 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:06.030336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:06.030406 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:06.056426 1311248 cri.go:89] found id: ""
	I1218 00:37:06.056440 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.056447 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:06.056453 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:06.056513 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:06.086319 1311248 cri.go:89] found id: ""
	I1218 00:37:06.086333 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.086341 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:06.086346 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:06.086413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:06.112062 1311248 cri.go:89] found id: ""
	I1218 00:37:06.112077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.112084 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:06.112089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:06.112157 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:06.137317 1311248 cri.go:89] found id: ""
	I1218 00:37:06.137331 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.137344 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:06.137351 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:06.137419 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:06.165090 1311248 cri.go:89] found id: ""
	I1218 00:37:06.165104 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.165111 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:06.165116 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:06.165174 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:06.190738 1311248 cri.go:89] found id: ""
	I1218 00:37:06.190753 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.190759 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:06.190765 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:06.190822 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:06.215038 1311248 cri.go:89] found id: ""
	I1218 00:37:06.215066 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.215075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:06.215083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:06.215094 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.270893 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:06.270915 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:06.285817 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:06.285834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:06.354768 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:06.346564   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.347477   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349158   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349463   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.350911   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:06.354777 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:06.354787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:06.416937 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:06.416957 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:08.951149 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:08.961238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:08.961297 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:08.985900 1311248 cri.go:89] found id: ""
	I1218 00:37:08.985916 1311248 logs.go:282] 0 containers: []
	W1218 00:37:08.985923 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:08.985928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:08.985993 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:09.016022 1311248 cri.go:89] found id: ""
	I1218 00:37:09.016036 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.016043 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:09.016048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:09.016106 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:09.040820 1311248 cri.go:89] found id: ""
	I1218 00:37:09.040841 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.040849 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:09.040853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:09.040912 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:09.065452 1311248 cri.go:89] found id: ""
	I1218 00:37:09.065466 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.065473 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:09.065478 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:09.065539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:09.095062 1311248 cri.go:89] found id: ""
	I1218 00:37:09.095077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.095083 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:09.095089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:09.095151 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:09.120274 1311248 cri.go:89] found id: ""
	I1218 00:37:09.120287 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.120294 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:09.120300 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:09.120366 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:09.144652 1311248 cri.go:89] found id: ""
	I1218 00:37:09.144667 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.144674 1311248 logs.go:284] No container was found matching "kindnet"
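The probe sequence above asks the CRI runtime, component by component, whether any container ever existed with a matching name; an empty ID list from `crictl ps -a --quiet --name=...` is what logs.go reports as "No container was found". Below is a hedged Go sketch of that loop: the command and component names are taken verbatim from the log, but the helper itself is illustrative (per the log, minikube's real logic lives in cri.go):

```go
// Mirror the probe shown above: for each control-plane component run
// `sudo crictl ps -a --quiet --name=<component>` and treat empty output
// as "no container found". Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```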
	I1218 00:37:09.144683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:09.144700 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:09.159355 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:09.159371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:09.224560 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:09.215599   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.216102   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.217785   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.218180   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.219658   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:09.215599   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.216102   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.217785   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.218180   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.219658   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
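For "describe nodes" the collector does not use the host's kubectl at all; it shells out to the kubectl binary minikube installed under /var/lib/minikube/binaries/v1.35.0-rc.1, pointed at the on-node kubeconfig. With no apiserver listening, the command exits with status 1 and its stderr appears twice in the entry, once inline in the error and once in the quoted `** stderr **` block. An illustrative way to run the same invocation while keeping stdout and stderr separate:

```go
// Illustrative only: run the same on-node kubectl invocation as the
// "failed describe nodes" entry above and capture stdout and stderr
// into separate buffers.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		`sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// With the apiserver down this prints the "connection refused" lines.
		fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
		return
	}
	fmt.Print(stdout.String())
}
```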
	I1218 00:37:09.224571 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:09.224582 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:09.286931 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:09.286951 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:09.318873 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:09.318888 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:11.876699 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:11.887524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:11.887583 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:11.913617 1311248 cri.go:89] found id: ""
	I1218 00:37:11.913631 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.913638 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:11.913643 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:11.913701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:11.942203 1311248 cri.go:89] found id: ""
	I1218 00:37:11.942219 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.942226 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:11.942231 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:11.942292 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:11.967671 1311248 cri.go:89] found id: ""
	I1218 00:37:11.967685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.967692 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:11.967697 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:11.967766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:11.992422 1311248 cri.go:89] found id: ""
	I1218 00:37:11.992437 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.992443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:11.992448 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:11.992505 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:12.031034 1311248 cri.go:89] found id: ""
	I1218 00:37:12.031049 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.031056 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:12.031061 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:12.031119 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:12.057654 1311248 cri.go:89] found id: ""
	I1218 00:37:12.057669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.057677 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:12.057682 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:12.057764 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:12.082063 1311248 cri.go:89] found id: ""
	I1218 00:37:12.082078 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.082084 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:12.082092 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:12.082102 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:12.111103 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:12.111119 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:12.168426 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:12.168446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:12.183407 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:12.183423 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:12.251784 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:12.243223   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.243928   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.245555   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.246042   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.247684   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:12.243223   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.243928   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.245555   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.246042   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.247684   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
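The containerd and kubelet gatherers both read the systemd journal, selecting the unit with `-u` and capping output at the newest 400 lines with `-n 400`. A small sketch of that gather step; the unit names and line count come from the log, while the `unitLogs` helper is made up for the example:

```go
// Illustrative: gather unit logs the same way the entries above do,
// via `sudo journalctl -u <unit> -n 400` run through bash.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo journalctl -u %s -n 400", unit)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"containerd", "kubelet"} {
		logs, err := unitLogs(u)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", u, err)
			continue
		}
		fmt.Printf("=== %s (last 400 lines) ===\n%s", u, logs)
	}
}
```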
	I1218 00:37:12.251803 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:12.251814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:14.823080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:14.834459 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:14.834525 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:14.860258 1311248 cri.go:89] found id: ""
	I1218 00:37:14.860272 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.860278 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:14.860283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:14.860341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:14.884703 1311248 cri.go:89] found id: ""
	I1218 00:37:14.884722 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.884729 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:14.884734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:14.884794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:14.909031 1311248 cri.go:89] found id: ""
	I1218 00:37:14.909046 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.909054 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:14.909059 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:14.909130 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:14.934504 1311248 cri.go:89] found id: ""
	I1218 00:37:14.934518 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.934525 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:14.934531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:14.934590 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:14.965623 1311248 cri.go:89] found id: ""
	I1218 00:37:14.965638 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.965646 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:14.965651 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:14.965718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:14.991607 1311248 cri.go:89] found id: ""
	I1218 00:37:14.991623 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.991631 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:14.991636 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:14.991711 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:15.027331 1311248 cri.go:89] found id: ""
	I1218 00:37:15.027347 1311248 logs.go:282] 0 containers: []
	W1218 00:37:15.027355 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:15.027364 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:15.027376 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:15.102509 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:15.094618   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.095457   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.096397   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.097224   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.098386   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:15.094618   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.095457   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.096397   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.097224   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.098386   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:15.102519 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:15.102530 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:15.167080 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:15.167101 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:15.200488 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:15.200504 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:15.261320 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:15.261342 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
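The timestamps on the `pgrep -xnf kube-apiserver.*minikube.*` lines (00:37:06, :08, :11, :14, :17, ...) show a fresh health pass every two to three seconds. A minimal stdlib poll loop in that spirit; the interval and attempt count are inferred from the log, not taken from minikube's source:

```go
// Re-run the pgrep check on a fixed interval, in the spirit of the
// repeating passes above. Interval and retry count are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits non-zero when no process matches the pattern.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for i := 0; i < 10; i++ {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up: kube-apiserver never appeared")
}
```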
	I1218 00:37:17.777092 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:17.788005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:17.788070 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:17.813820 1311248 cri.go:89] found id: ""
	I1218 00:37:17.813834 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.813841 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:17.813846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:17.813906 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:17.841574 1311248 cri.go:89] found id: ""
	I1218 00:37:17.841588 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.841605 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:17.841610 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:17.841679 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:17.865628 1311248 cri.go:89] found id: ""
	I1218 00:37:17.865644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.865650 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:17.865656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:17.865713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:17.891259 1311248 cri.go:89] found id: ""
	I1218 00:37:17.891273 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.891289 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:17.891295 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:17.891363 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:17.918377 1311248 cri.go:89] found id: ""
	I1218 00:37:17.918391 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.918398 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:17.918403 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:17.918461 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:17.948139 1311248 cri.go:89] found id: ""
	I1218 00:37:17.948171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.948178 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:17.948183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:17.948251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:17.971855 1311248 cri.go:89] found id: ""
	I1218 00:37:17.971869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.971876 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:17.971884 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:17.971894 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:18.026594 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:18.026614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:18.042303 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:18.042328 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:18.108683 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:18.100223   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.100970   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.102555   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.103125   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.104779   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:18.100223   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.100970   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.102555   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.103125   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.104779   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:18.108704 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:18.108729 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:18.172657 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:18.172676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
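The "container status" command is deliberately defensive shell: `which crictl || echo crictl` substitutes the resolved path when crictl is on PATH and the bare name otherwise, and if the whole crictl invocation fails, the `|| sudo docker ps -a` clause falls back to Docker. The same chain expressed natively in Go (illustrative only):

```go
// A native sketch of the fallback chain in the "container status"
// command above: prefer crictl (by absolute path when resolvable),
// then fall back to docker.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	crictl := "crictl"
	if path, err := exec.LookPath("crictl"); err == nil {
		crictl = path // mirrors `which crictl || echo crictl`
	}
	if out, err := exec.Command("sudo", crictl, "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	// mirrors the `|| sudo docker ps -a` fallback
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker worked:", err)
		return
	}
	fmt.Print(string(out))
}
```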
	I1218 00:37:20.704818 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:20.715060 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:20.715120 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:20.741147 1311248 cri.go:89] found id: ""
	I1218 00:37:20.741161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.741168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:20.741174 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:20.741231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:20.765846 1311248 cri.go:89] found id: ""
	I1218 00:37:20.765860 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.765867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:20.765872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:20.765930 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:20.795338 1311248 cri.go:89] found id: ""
	I1218 00:37:20.795351 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.795358 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:20.795364 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:20.795421 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:20.823054 1311248 cri.go:89] found id: ""
	I1218 00:37:20.823068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.823075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:20.823080 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:20.823137 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:20.848186 1311248 cri.go:89] found id: ""
	I1218 00:37:20.848200 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.848208 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:20.848213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:20.848278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:20.872642 1311248 cri.go:89] found id: ""
	I1218 00:37:20.872656 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.872662 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:20.872668 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:20.872771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:20.897151 1311248 cri.go:89] found id: ""
	I1218 00:37:20.897165 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.897172 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:20.897180 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:20.897190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:20.951948 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:20.951968 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:20.966927 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:20.966943 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:21.033275 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:21.024825   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.025249   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.026815   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.028251   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.029400   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:21.024825   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.025249   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.026815   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.028251   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.029400   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:21.033286 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:21.033296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:21.096425 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:21.096445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:23.624716 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:23.635084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:23.635160 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:23.668648 1311248 cri.go:89] found id: ""
	I1218 00:37:23.668662 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.668670 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:23.668675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:23.668755 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:23.700454 1311248 cri.go:89] found id: ""
	I1218 00:37:23.700468 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.700475 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:23.700480 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:23.700538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:23.732021 1311248 cri.go:89] found id: ""
	I1218 00:37:23.732035 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.732043 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:23.732048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:23.732124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:23.760854 1311248 cri.go:89] found id: ""
	I1218 00:37:23.760868 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.760875 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:23.760881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:23.760942 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:23.786164 1311248 cri.go:89] found id: ""
	I1218 00:37:23.786178 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.786185 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:23.786189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:23.786248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:23.811196 1311248 cri.go:89] found id: ""
	I1218 00:37:23.811220 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.811229 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:23.811234 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:23.811300 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:23.835282 1311248 cri.go:89] found id: ""
	I1218 00:37:23.835297 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.835314 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:23.835323 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:23.835334 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:23.899950 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:23.891162   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.891981   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893473   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893917   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.895404   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:23.891162   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.891981   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893473   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893917   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.895404   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:23.899970 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:23.899981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:23.966454 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:23.966474 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:23.994564 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:23.994580 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:24.052734 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:24.052755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
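The dmesg gather filters the kernel ring buffer rather than dumping it whole: in util-linux dmesg, `--level warn,err,crit,alert,emerg` keeps warnings and worse, `-H` formats timestamps for humans, `-L=never` disables color, `-P` skips the pager, and the trailing pipe to `tail -n 400` trims to the most recent lines. Reproducing the gather is a one-liner wrapped in exec (illustrative):

```go
// Illustrative: the same filtered dmesg gather as above, piped through
// `tail -n 400` so only the newest kernel messages are kept.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("/bin/bash", "-c",
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`).CombinedOutput()
	if err != nil {
		fmt.Println("dmesg gather failed:", err)
		return
	}
	fmt.Print(string(out))
}
```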
	I1218 00:37:26.568298 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:26.578561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:26.578622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:26.602733 1311248 cri.go:89] found id: ""
	I1218 00:37:26.602747 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.602755 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:26.602761 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:26.602826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:26.631092 1311248 cri.go:89] found id: ""
	I1218 00:37:26.631106 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.631113 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:26.631118 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:26.631180 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:26.677513 1311248 cri.go:89] found id: ""
	I1218 00:37:26.677528 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.677536 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:26.677541 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:26.677608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:26.712071 1311248 cri.go:89] found id: ""
	I1218 00:37:26.712085 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.712093 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:26.712100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:26.712167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:26.738769 1311248 cri.go:89] found id: ""
	I1218 00:37:26.738783 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.738790 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:26.738795 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:26.738857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:26.764344 1311248 cri.go:89] found id: ""
	I1218 00:37:26.764358 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.764365 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:26.764370 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:26.764428 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:26.790276 1311248 cri.go:89] found id: ""
	I1218 00:37:26.790290 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.790297 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:26.790305 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:26.790315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:26.845607 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:26.845626 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:26.861063 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:26.861080 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:26.931574 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:26.923282   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.923912   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925418   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925858   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.927306   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:26.923282   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.923912   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925418   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925858   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.927306   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:26.931584 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:26.931595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:26.998426 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:26.998445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:29.540997 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:29.551044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:29.551103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:29.575146 1311248 cri.go:89] found id: ""
	I1218 00:37:29.575161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.575168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:29.575173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:29.575230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:29.599039 1311248 cri.go:89] found id: ""
	I1218 00:37:29.599052 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.599059 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:29.599064 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:29.599123 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:29.623971 1311248 cri.go:89] found id: ""
	I1218 00:37:29.623985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.623993 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:29.623998 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:29.624057 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:29.653653 1311248 cri.go:89] found id: ""
	I1218 00:37:29.653669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.653675 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:29.653681 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:29.653754 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:29.687572 1311248 cri.go:89] found id: ""
	I1218 00:37:29.687586 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.687593 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:29.687599 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:29.687670 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:29.725789 1311248 cri.go:89] found id: ""
	I1218 00:37:29.725803 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.725811 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:29.725816 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:29.725878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:29.753212 1311248 cri.go:89] found id: ""
	I1218 00:37:29.753226 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.753233 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:29.753241 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:29.753253 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:29.810976 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:29.810996 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:29.825952 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:29.825969 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:29.893717 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:29.885172   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.885837   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.887444   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.888114   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.889881   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:29.885172   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.885837   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.887444   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.888114   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.889881   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:29.893736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:29.893748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:29.959773 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:29.959794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
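Every cycle in this stretch ends identically: zero control-plane containers found and a refused connection from describe nodes. Whatever drives these passes therefore needs an overall deadline rather than retrying forever. A bounded wrapper in that spirit; the two-minute deadline and three-second tick are illustrative values, not minikube's:

```go
// Keep re-checking the apiserver port until it answers or an overall
// deadline expires. Illustrative sketch; values are assumptions.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for the apiserver:", ctx.Err())
			return
		case <-ticker.C:
			if conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second); err == nil {
				conn.Close()
				fmt.Println("apiserver port is open")
				return
			}
		}
	}
}
```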
	I1218 00:37:32.492460 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:32.502745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:32.502807 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:32.528416 1311248 cri.go:89] found id: ""
	I1218 00:37:32.528431 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.528438 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:32.528443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:32.528501 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:32.553770 1311248 cri.go:89] found id: ""
	I1218 00:37:32.553785 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.553792 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:32.553798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:32.553861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:32.577941 1311248 cri.go:89] found id: ""
	I1218 00:37:32.577956 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.577963 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:32.577969 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:32.578028 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:32.604043 1311248 cri.go:89] found id: ""
	I1218 00:37:32.604058 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.604075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:32.604081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:32.604159 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:32.629080 1311248 cri.go:89] found id: ""
	I1218 00:37:32.629095 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.629102 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:32.629108 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:32.629167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:32.664156 1311248 cri.go:89] found id: ""
	I1218 00:37:32.664171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.664187 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:32.664193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:32.664281 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:32.692107 1311248 cri.go:89] found id: ""
	I1218 00:37:32.692141 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.692149 1311248 logs.go:284] No container was found matching "kindnet"
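Each cycle above is the same control-plane poll: pgrep for a running kube-apiserver process, then one crictl query per expected component, where found id: "" means the container was never created. The equivalent check by hand (a sketch using the component names from this log; assumes crictl is available inside the node):

	# query each expected control-plane container, including stopped ones (-a)
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "nothing matching $c" || echo "$c -> $ids"
	done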
	I1218 00:37:32.692158 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:32.692168 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:32.758211 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:32.758238 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:32.774028 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:32.774047 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:32.839724 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:32.829408   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.831067   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.832031   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.833732   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.834447   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:32.829408   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.831067   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.832031   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.833732   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.834447   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
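The describe-nodes step runs the version-pinned kubectl that minikube installs on the node, pointed at the node-local kubeconfig, so the failure can be reproduced directly with the same command the log shows, wrapped in minikube ssh:

	minikube -p functional-232602 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	  describe nodes --kubeconfig=/var/lib/minikube/kubeconfig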
	I1218 00:37:32.839734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:32.839749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:32.905609 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:32.905633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
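The log sources gathered in each pass (kubelet, dmesg, describe nodes, containerd, container status) can be pulled the same way outside the test harness. The container-status command also shows a fallback chain: resolve crictl's full path with which, fall back to the bare name, and finally to docker ps if crictl fails altogether. A sketch (--no-pager avoids interactive paging over ssh; the single quotes defer expansion to the node's shell):

	minikube -p functional-232602 ssh -- sudo journalctl -u kubelet -n 400 --no-pager
	minikube -p functional-232602 ssh -- sudo journalctl -u containerd -n 400 --no-pager
	minikube -p functional-232602 ssh -- 'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'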
	I1218 00:37:35.434204 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:35.445035 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:35.445099 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:35.470531 1311248 cri.go:89] found id: ""
	I1218 00:37:35.470545 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.470553 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:35.470558 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:35.470621 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:35.494976 1311248 cri.go:89] found id: ""
	I1218 00:37:35.494990 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.494996 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:35.495001 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:35.495063 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:35.519629 1311248 cri.go:89] found id: ""
	I1218 00:37:35.519644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.519651 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:35.519656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:35.519714 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:35.544438 1311248 cri.go:89] found id: ""
	I1218 00:37:35.544453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.544460 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:35.544465 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:35.544523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:35.569684 1311248 cri.go:89] found id: ""
	I1218 00:37:35.569699 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.569706 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:35.569712 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:35.569771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:35.595541 1311248 cri.go:89] found id: ""
	I1218 00:37:35.595556 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.595563 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:35.595568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:35.595632 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:35.620307 1311248 cri.go:89] found id: ""
	I1218 00:37:35.620321 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.620328 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:35.620336 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:35.620346 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:35.678927 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:35.678945 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:35.697469 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:35.697488 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:35.774692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:35.766317   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.766971   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768479   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768968   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.770457   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:35.766317   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.766971   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768479   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768968   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.770457   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:35.774703 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:35.774713 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:35.836772 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:35.836792 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:38.369786 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:38.380243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:38.380304 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:38.406412 1311248 cri.go:89] found id: ""
	I1218 00:37:38.406426 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.406433 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:38.406439 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:38.406497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:38.431433 1311248 cri.go:89] found id: ""
	I1218 00:37:38.431447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.431454 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:38.431460 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:38.431518 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:38.455854 1311248 cri.go:89] found id: ""
	I1218 00:37:38.455869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.455876 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:38.455881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:38.455943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:38.480414 1311248 cri.go:89] found id: ""
	I1218 00:37:38.480428 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.480435 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:38.480440 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:38.480497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:38.506521 1311248 cri.go:89] found id: ""
	I1218 00:37:38.506535 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.506551 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:38.506557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:38.506630 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:38.531738 1311248 cri.go:89] found id: ""
	I1218 00:37:38.531762 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.531769 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:38.531774 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:38.531840 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:38.557054 1311248 cri.go:89] found id: ""
	I1218 00:37:38.557068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.557075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:38.557083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:38.557092 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:38.613102 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:38.613120 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:38.627653 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:38.627670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:38.723568 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:38.715053   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.715674   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717355   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717806   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.719420   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:38.715053   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.715674   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717355   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717806   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.719420   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:38.723579 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:38.723591 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:38.784988 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:38.785008 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:41.315880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:41.326378 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:41.326457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:41.351366 1311248 cri.go:89] found id: ""
	I1218 00:37:41.351381 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.351390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:41.351395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:41.351454 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:41.376110 1311248 cri.go:89] found id: ""
	I1218 00:37:41.376124 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.376131 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:41.376137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:41.376192 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:41.401062 1311248 cri.go:89] found id: ""
	I1218 00:37:41.401075 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.401082 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:41.401087 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:41.401146 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:41.425454 1311248 cri.go:89] found id: ""
	I1218 00:37:41.425469 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.425475 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:41.425481 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:41.425539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:41.454711 1311248 cri.go:89] found id: ""
	I1218 00:37:41.454724 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.454732 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:41.454737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:41.454799 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:41.479667 1311248 cri.go:89] found id: ""
	I1218 00:37:41.479681 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.479688 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:41.479694 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:41.479752 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:41.504248 1311248 cri.go:89] found id: ""
	I1218 00:37:41.504261 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.504268 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:41.504276 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:41.504323 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:41.559589 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:41.559609 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:41.574018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:41.574034 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:41.637175 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:41.628415   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.628972   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.630734   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.631095   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.632663   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:41.628415   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.628972   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.630734   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.631095   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.632663   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:41.637186 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:41.637196 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:41.712099 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:41.712122 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.243063 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:44.253213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:44.253272 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:44.278124 1311248 cri.go:89] found id: ""
	I1218 00:37:44.278138 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.278145 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:44.278150 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:44.278211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:44.302729 1311248 cri.go:89] found id: ""
	I1218 00:37:44.302743 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.302750 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:44.302755 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:44.302813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:44.327369 1311248 cri.go:89] found id: ""
	I1218 00:37:44.327384 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.327391 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:44.327396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:44.327458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:44.351769 1311248 cri.go:89] found id: ""
	I1218 00:37:44.351784 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.351791 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:44.351796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:44.351858 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:44.378488 1311248 cri.go:89] found id: ""
	I1218 00:37:44.378502 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.378509 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:44.378514 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:44.378574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:44.404134 1311248 cri.go:89] found id: ""
	I1218 00:37:44.404149 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.404156 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:44.404161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:44.404219 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:44.428529 1311248 cri.go:89] found id: ""
	I1218 00:37:44.428543 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.428551 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:44.428559 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:44.428570 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:44.443196 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:44.443212 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:44.505692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:44.497164   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.497880   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.499622   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.500221   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.501801   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:44.497164   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.497880   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.499622   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.500221   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.501801   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:44.505702 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:44.505712 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:44.571665 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:44.571686 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.600535 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:44.600553 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:47.157844 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:47.168414 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:47.168474 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:47.197971 1311248 cri.go:89] found id: ""
	I1218 00:37:47.197985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.197992 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:47.197997 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:47.198054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:47.223237 1311248 cri.go:89] found id: ""
	I1218 00:37:47.223251 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.223258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:47.223263 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:47.223322 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:47.251998 1311248 cri.go:89] found id: ""
	I1218 00:37:47.252018 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.252025 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:47.252031 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:47.252089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:47.275741 1311248 cri.go:89] found id: ""
	I1218 00:37:47.275755 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.275764 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:47.275769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:47.275826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:47.302583 1311248 cri.go:89] found id: ""
	I1218 00:37:47.302597 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.302604 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:47.302609 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:47.302665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:47.327501 1311248 cri.go:89] found id: ""
	I1218 00:37:47.327516 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.327523 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:47.327528 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:47.327594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:47.352433 1311248 cri.go:89] found id: ""
	I1218 00:37:47.352447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.352454 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:47.352463 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:47.352473 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:47.410340 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:47.410362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:47.425365 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:47.425388 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:47.492532 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:47.484205   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.484730   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486231   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486712   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.488178   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:47.484205   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.484730   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486231   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486712   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.488178   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:47.492542 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:47.492562 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:47.553805 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:47.553828 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:50.086246 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:50.097136 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:50.097206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:50.124671 1311248 cri.go:89] found id: ""
	I1218 00:37:50.124685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.124693 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:50.124698 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:50.124766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:50.150439 1311248 cri.go:89] found id: ""
	I1218 00:37:50.150453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.150460 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:50.150464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:50.150523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:50.174899 1311248 cri.go:89] found id: ""
	I1218 00:37:50.174913 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.174921 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:50.174926 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:50.174992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:50.200398 1311248 cri.go:89] found id: ""
	I1218 00:37:50.200412 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.200420 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:50.200425 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:50.200486 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:50.226325 1311248 cri.go:89] found id: ""
	I1218 00:37:50.226338 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.226345 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:50.226350 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:50.226409 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:50.251194 1311248 cri.go:89] found id: ""
	I1218 00:37:50.251208 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.251215 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:50.251220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:50.251287 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:50.278029 1311248 cri.go:89] found id: ""
	I1218 00:37:50.278043 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.278050 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:50.278057 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:50.278067 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:50.338421 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:50.338443 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:50.368542 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:50.368565 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:50.423715 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:50.423734 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:50.438292 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:50.438308 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:50.499550 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:50.491066   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.491885   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493525   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493818   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.495259   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:50.491066   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.491885   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493525   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493818   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.495259   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:52.999811 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:53.011389 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:53.011453 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:53.036842 1311248 cri.go:89] found id: ""
	I1218 00:37:53.036861 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.036869 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:53.036884 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:53.036981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:53.069368 1311248 cri.go:89] found id: ""
	I1218 00:37:53.069383 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.069391 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:53.069397 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:53.069458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:53.093990 1311248 cri.go:89] found id: ""
	I1218 00:37:53.094004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.094011 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:53.094016 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:53.094076 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:53.119386 1311248 cri.go:89] found id: ""
	I1218 00:37:53.119400 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.119417 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:53.119423 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:53.119487 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:53.144979 1311248 cri.go:89] found id: ""
	I1218 00:37:53.144992 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.144999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:53.145005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:53.145062 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:53.171485 1311248 cri.go:89] found id: ""
	I1218 00:37:53.171499 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.171506 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:53.171512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:53.171570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:53.198517 1311248 cri.go:89] found id: ""
	I1218 00:37:53.198530 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.198537 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:53.198545 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:53.198556 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:53.225701 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:53.225719 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:53.280281 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:53.280300 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:53.295217 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:53.295235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:53.360920 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:53.352238   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.352952   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.354786   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.355181   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.356802   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:53.352238   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.352952   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.354786   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.355181   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.356802   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:53.360930 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:53.360940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:55.923673 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:55.935823 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:55.935880 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:55.963196 1311248 cri.go:89] found id: ""
	I1218 00:37:55.963210 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.963217 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:55.963222 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:55.963278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:55.992688 1311248 cri.go:89] found id: ""
	I1218 00:37:55.992701 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.992708 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:55.992713 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:55.992778 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:56.032683 1311248 cri.go:89] found id: ""
	I1218 00:37:56.032696 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.032705 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:56.032711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:56.032779 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:56.061554 1311248 cri.go:89] found id: ""
	I1218 00:37:56.061568 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.061575 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:56.061580 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:56.061639 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:56.090855 1311248 cri.go:89] found id: ""
	I1218 00:37:56.090869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.090877 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:56.090882 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:56.090943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:56.115990 1311248 cri.go:89] found id: ""
	I1218 00:37:56.116004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.116020 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:56.116026 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:56.116085 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:56.141361 1311248 cri.go:89] found id: ""
	I1218 00:37:56.141385 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.141393 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:56.141401 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:56.141412 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:56.202998 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:56.194857   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.195410   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197059   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197614   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.199168   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:56.194857   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.195410   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197059   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197614   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.199168   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:56.203008 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:56.203019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:56.263974 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:56.263994 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:56.295494 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:56.295509 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:56.350431 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:56.350450 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
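
Each iteration above scans for every control-plane component by running `crictl ps -a --quiet --name=<component>` and treating an empty ID list as "no container found". A self-contained sketch of that scan (assumptions: run locally rather than through minikube's ssh_runner, crictl available via sudo on PATH):

    // cri_scan.go: list all CRI containers matching each component name,
    // mirroring the per-component checks in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		// An empty list corresponds to the repeated
    		// `No container was found matching "<name>"` warnings above.
    		fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
    	}
    }
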
	I1218 00:37:58.867454 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:58.877799 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:58.877861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:58.929615 1311248 cri.go:89] found id: ""
	I1218 00:37:58.929629 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.929636 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:58.929642 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:58.929701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:58.958880 1311248 cri.go:89] found id: ""
	I1218 00:37:58.958894 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.958900 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:58.958906 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:58.958965 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:58.983460 1311248 cri.go:89] found id: ""
	I1218 00:37:58.983475 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.983482 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:58.983487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:58.983547 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:59.009476 1311248 cri.go:89] found id: ""
	I1218 00:37:59.009490 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.009497 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:59.009503 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:59.009563 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:59.033436 1311248 cri.go:89] found id: ""
	I1218 00:37:59.033450 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.033457 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:59.033462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:59.033522 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:59.058635 1311248 cri.go:89] found id: ""
	I1218 00:37:59.058649 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.058656 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:59.058661 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:59.058719 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:59.082644 1311248 cri.go:89] found id: ""
	I1218 00:37:59.082658 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.082666 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:59.082673 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:59.082684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:59.138067 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:59.138085 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:59.154868 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:59.154884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:59.232032 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:59.223238   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.223623   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225341   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225946   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.227686   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:59.223238   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.223623   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225341   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225946   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.227686   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:59.232043 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:59.232061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:59.297264 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:59.297288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
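
The "Gathering logs for containerd/kubelet" steps tail the last 400 journal entries for each systemd unit. A local stand-in for that step (plain os/exec instead of minikube's ssh_runner; file and function names are hypothetical):

    // gather_logs.go: fetch the last 400 journal lines for a unit, as the
    // `journalctl -u <unit> -n 400` invocations in the log above do.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func unitLogs(unit string) (string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit,
    		"-n", "400").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, unit := range []string{"containerd", "kubelet"} {
    		logs, err := unitLogs(unit)
    		if err != nil {
    			fmt.Printf("%s: %v\n", unit, err)
    			continue
    		}
    		fmt.Printf("== last 400 lines for %s ==\n%s\n", unit, logs)
    	}
    }
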
	I1218 00:38:01.827672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:01.838270 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:01.838330 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:01.862836 1311248 cri.go:89] found id: ""
	I1218 00:38:01.862855 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.862862 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:01.862867 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:01.862925 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:01.892782 1311248 cri.go:89] found id: ""
	I1218 00:38:01.892797 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.892804 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:01.892810 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:01.892876 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:01.919043 1311248 cri.go:89] found id: ""
	I1218 00:38:01.919068 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.919076 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:01.919081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:01.919148 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:01.945252 1311248 cri.go:89] found id: ""
	I1218 00:38:01.945267 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.945285 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:01.945291 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:01.945368 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:01.974338 1311248 cri.go:89] found id: ""
	I1218 00:38:01.974353 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.974361 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:01.974366 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:01.974433 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:02.003307 1311248 cri.go:89] found id: ""
	I1218 00:38:02.003324 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.003332 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:02.003339 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:02.003423 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:02.030938 1311248 cri.go:89] found id: ""
	I1218 00:38:02.030953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.030960 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:02.030968 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:02.030979 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:02.100511 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:02.091512   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.092078   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.093817   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.094344   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.095821   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:02.091512   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.092078   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.093817   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.094344   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.095821   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:02.100521 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:02.100531 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:02.162112 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:02.162132 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:02.191957 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:02.191976 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:02.248095 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:02.248116 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:04.765008 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:04.775100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:04.775168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:04.799097 1311248 cri.go:89] found id: ""
	I1218 00:38:04.799125 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.799132 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:04.799137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:04.799206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:04.826968 1311248 cri.go:89] found id: ""
	I1218 00:38:04.826993 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.827000 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:04.827005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:04.827083 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:04.860005 1311248 cri.go:89] found id: ""
	I1218 00:38:04.860020 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.860027 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:04.860032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:04.860103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:04.886293 1311248 cri.go:89] found id: ""
	I1218 00:38:04.886307 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.886315 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:04.886320 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:04.886385 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:04.918579 1311248 cri.go:89] found id: ""
	I1218 00:38:04.918594 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.918601 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:04.918607 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:04.918676 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:04.945152 1311248 cri.go:89] found id: ""
	I1218 00:38:04.945167 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.945183 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:04.945189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:04.945258 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:04.976410 1311248 cri.go:89] found id: ""
	I1218 00:38:04.976424 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.976432 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:04.976439 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:04.976449 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:05.032080 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:05.032100 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:05.047379 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:05.047396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:05.113965 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:05.105127   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.105769   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.107565   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.108085   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.109817   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:05.105127   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.105769   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.107565   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.108085   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.109817   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:05.113975 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:05.113986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:05.174878 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:05.174897 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:07.706926 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:07.717077 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:07.717140 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:07.741430 1311248 cri.go:89] found id: ""
	I1218 00:38:07.741464 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.741471 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:07.741477 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:07.741538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:07.766770 1311248 cri.go:89] found id: ""
	I1218 00:38:07.766784 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.766791 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:07.766796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:07.766855 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:07.790902 1311248 cri.go:89] found id: ""
	I1218 00:38:07.790917 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.790924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:07.790929 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:07.791005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:07.819681 1311248 cri.go:89] found id: ""
	I1218 00:38:07.819696 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.819703 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:07.819708 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:07.819770 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:07.844498 1311248 cri.go:89] found id: ""
	I1218 00:38:07.844512 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.844519 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:07.844524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:07.844584 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:07.870028 1311248 cri.go:89] found id: ""
	I1218 00:38:07.870043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.870050 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:07.870057 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:07.870125 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:07.906969 1311248 cri.go:89] found id: ""
	I1218 00:38:07.906984 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.906999 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:07.907007 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:07.907017 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:07.974278 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:07.974306 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:07.989533 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:07.989551 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:08.055867 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:08.047282   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.048178   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.049950   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.050299   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.051821   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:08.047282   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.048178   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.049950   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.050299   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.051821   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:08.055877 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:08.055889 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:08.118669 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:08.118693 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
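
The "container status" step above uses a shell fallback: prefer crictl, and fall back to `docker ps -a` if crictl is missing or fails. Because the one-liner relies on backticks and `||`, it has to go through a shell. A sketch of invoking it from Go (illustrative only; the command string is copied verbatim from the log):

    // container_status.go: run the crictl-or-docker fallback through bash,
    // as the log's "container status" gathering step does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("both crictl and docker listings failed:", err)
    	}
    	fmt.Print(string(out))
    }
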
	I1218 00:38:10.651292 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:10.663394 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:10.663471 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:10.687520 1311248 cri.go:89] found id: ""
	I1218 00:38:10.687534 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.687542 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:10.687547 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:10.687608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:10.713147 1311248 cri.go:89] found id: ""
	I1218 00:38:10.713161 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.713168 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:10.713173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:10.713231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:10.737926 1311248 cri.go:89] found id: ""
	I1218 00:38:10.737940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.737948 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:10.737953 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:10.738012 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:10.763422 1311248 cri.go:89] found id: ""
	I1218 00:38:10.763436 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.763443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:10.763449 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:10.763508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:10.788619 1311248 cri.go:89] found id: ""
	I1218 00:38:10.788659 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.788672 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:10.788677 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:10.788738 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:10.813718 1311248 cri.go:89] found id: ""
	I1218 00:38:10.813732 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.813740 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:10.813745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:10.813803 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:10.837575 1311248 cri.go:89] found id: ""
	I1218 00:38:10.837588 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.837595 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:10.837603 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:10.837614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:10.852133 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:10.852149 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:10.917780 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:10.909596   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.910424   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912063   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912382   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.913799   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:10.909596   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.910424   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912063   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912382   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.913799   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:10.917791 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:10.917801 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:10.987674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:10.987695 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:11.024530 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:11.024549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.581947 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:13.592491 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:13.592556 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:13.617579 1311248 cri.go:89] found id: ""
	I1218 00:38:13.617593 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.617600 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:13.617605 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:13.617665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:13.641975 1311248 cri.go:89] found id: ""
	I1218 00:38:13.641990 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.641997 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:13.642002 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:13.642060 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:13.667128 1311248 cri.go:89] found id: ""
	I1218 00:38:13.667142 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.667149 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:13.667154 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:13.667215 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:13.699564 1311248 cri.go:89] found id: ""
	I1218 00:38:13.699579 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.699586 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:13.699591 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:13.699655 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:13.727620 1311248 cri.go:89] found id: ""
	I1218 00:38:13.727634 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.727641 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:13.727646 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:13.727703 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:13.756118 1311248 cri.go:89] found id: ""
	I1218 00:38:13.756132 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.756138 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:13.756144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:13.756204 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:13.780706 1311248 cri.go:89] found id: ""
	I1218 00:38:13.780720 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.780728 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:13.780736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:13.780746 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:13.842845 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:13.842864 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:13.871826 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:13.871843 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.932300 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:13.932319 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:13.950089 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:13.950106 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:14.022114 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:14.013202   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.013842   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.015677   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.016243   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.018014   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:14.013202   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.013842   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.015677   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.016243   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.018014   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
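
The timestamps show the whole diagnostic pass re-running roughly every three seconds, gated on the `pgrep -xnf kube-apiserver.*minikube.*` probe at the top of each iteration. A rough sketch of that retry cadence (the 3-second interval is read off the log spacing and the 2-minute deadline is a placeholder; minikube's actual wait logic lives elsewhere):

    // poll_apiserver.go: re-check for a kube-apiserver process until it
    // appears or a deadline passes, matching the loop structure above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func apiserverRunning() bool {
    	// Mirrors the `pgrep -xnf kube-apiserver.*minikube.*` probe in the log.
    	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for !apiserverRunning() {
    		if time.Now().After(deadline) {
    			fmt.Println("timed out waiting for kube-apiserver")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("kube-apiserver process found")
    }
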
	I1218 00:38:16.522391 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:16.534271 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:16.534357 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:16.558729 1311248 cri.go:89] found id: ""
	I1218 00:38:16.558743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.558757 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:16.558762 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:16.558819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:16.587758 1311248 cri.go:89] found id: ""
	I1218 00:38:16.587772 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.587779 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:16.587784 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:16.587841 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:16.612793 1311248 cri.go:89] found id: ""
	I1218 00:38:16.612807 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.612814 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:16.612819 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:16.612907 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:16.637417 1311248 cri.go:89] found id: ""
	I1218 00:38:16.637431 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.637438 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:16.637443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:16.637508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:16.662059 1311248 cri.go:89] found id: ""
	I1218 00:38:16.662073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.662080 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:16.662085 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:16.662141 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:16.686710 1311248 cri.go:89] found id: ""
	I1218 00:38:16.686724 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.686731 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:16.686737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:16.686794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:16.711539 1311248 cri.go:89] found id: ""
	I1218 00:38:16.711553 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.711561 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:16.711569 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:16.711579 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:16.739136 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:16.739151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:16.794672 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:16.794694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:16.809147 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:16.809171 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:16.878702 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:16.870875   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.871602   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873068   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873374   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.874856   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:16.870875   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.871602   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873068   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873374   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.874856   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:16.878711 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:16.878723 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.444575 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:19.454827 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:19.454887 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:19.482057 1311248 cri.go:89] found id: ""
	I1218 00:38:19.482071 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.482078 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:19.482083 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:19.482142 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:19.505124 1311248 cri.go:89] found id: ""
	I1218 00:38:19.505138 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.505146 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:19.505151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:19.505209 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:19.530010 1311248 cri.go:89] found id: ""
	I1218 00:38:19.530024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.530031 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:19.530037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:19.530094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:19.555994 1311248 cri.go:89] found id: ""
	I1218 00:38:19.556008 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.556025 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:19.556030 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:19.556087 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:19.580515 1311248 cri.go:89] found id: ""
	I1218 00:38:19.580539 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.580546 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:19.580554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:19.580619 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:19.605333 1311248 cri.go:89] found id: ""
	I1218 00:38:19.605348 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.605354 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:19.605360 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:19.605418 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:19.630483 1311248 cri.go:89] found id: ""
	I1218 00:38:19.630497 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.630504 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:19.630512 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:19.630522 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:19.693128 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:19.684824   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.685371   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.686899   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.687332   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.688942   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:19.684824   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.685371   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.686899   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.687332   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.688942   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:19.693138 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:19.693148 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.755570 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:19.755590 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:19.785139 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:19.785156 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:19.842579 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:19.842605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
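
The round above is minikube's apiserver health-check loop: every few seconds it polls for a kube-apiserver process, lists CRI containers for each control-plane component, and, when all of those come back empty, gathers kubelet, containerd, dmesg, describe-nodes, and container-status logs before retrying. A minimal sketch of the same probe, runnable by hand inside the node (assuming crictl is on PATH and containerd uses the /run/containerd/runc/k8s.io root seen in the log):

    # Is kube-apiserver running as a plain process? (mirrors the pgrep call above)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'

    # Any CRI container, running or exited, for each control-plane component?
    # (mirrors the crictl calls minikube loops over)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching $name"
    done
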
	I1218 00:38:22.358338 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:22.368724 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:22.368793 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:22.392394 1311248 cri.go:89] found id: ""
	I1218 00:38:22.392408 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.392415 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:22.392420 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:22.392478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:22.419029 1311248 cri.go:89] found id: ""
	I1218 00:38:22.419043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.419050 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:22.419055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:22.419117 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:22.443838 1311248 cri.go:89] found id: ""
	I1218 00:38:22.443852 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.443859 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:22.443864 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:22.443923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:22.467780 1311248 cri.go:89] found id: ""
	I1218 00:38:22.467794 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.467801 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:22.467807 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:22.467864 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:22.497254 1311248 cri.go:89] found id: ""
	I1218 00:38:22.497268 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.497276 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:22.497281 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:22.497340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:22.521672 1311248 cri.go:89] found id: ""
	I1218 00:38:22.521686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.521693 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:22.521699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:22.521758 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:22.548085 1311248 cri.go:89] found id: ""
	I1218 00:38:22.548119 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.548126 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:22.548134 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:22.548144 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:22.614828 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:22.614852 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:22.643447 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:22.643462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:22.698947 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:22.698967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:22.713971 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:22.713986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:22.789955 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:22.774480   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.775084   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.776811   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.783961   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.784735   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:22.774480   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.775084   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.776811   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.783961   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.784735   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
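
Every describe-nodes attempt in this window fails identically: kubectl cannot open a TCP connection to localhost:8441, the apiserver port for this profile, which points at no API server ever binding the port rather than one answering slowly. A quick manual confirmation, sketched under the assumption that the node image ships ss from iproute2 and with <profile> as a placeholder for the profile name:

    # Hypothetical check from the host; <profile> is a placeholder.
    minikube ssh -p <profile> "sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'"
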
	I1218 00:38:25.290158 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:25.300164 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:25.300226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:25.323897 1311248 cri.go:89] found id: ""
	I1218 00:38:25.323912 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.323919 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:25.323924 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:25.323985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:25.352232 1311248 cri.go:89] found id: ""
	I1218 00:38:25.352245 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.352252 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:25.352257 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:25.352314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:25.376749 1311248 cri.go:89] found id: ""
	I1218 00:38:25.376785 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.376792 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:25.376797 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:25.376868 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:25.401002 1311248 cri.go:89] found id: ""
	I1218 00:38:25.401015 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.401023 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:25.401028 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:25.401089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:25.426497 1311248 cri.go:89] found id: ""
	I1218 00:38:25.426510 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.426517 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:25.426522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:25.426579 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:25.450505 1311248 cri.go:89] found id: ""
	I1218 00:38:25.450518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.450525 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:25.450536 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:25.450593 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:25.478999 1311248 cri.go:89] found id: ""
	I1218 00:38:25.479013 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.479029 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:25.479037 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:25.479048 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:25.540968 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:25.532836   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.533607   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535202   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535518   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.537180   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:25.532836   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.533607   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535202   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535518   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.537180   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:25.540977 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:25.540987 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:25.601527 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:25.601546 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:25.633804 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:25.633826 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:25.691056 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:25.691076 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.206639 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:28.217134 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:28.217198 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:28.242357 1311248 cri.go:89] found id: ""
	I1218 00:38:28.242372 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.242378 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:28.242384 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:28.242449 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:28.271155 1311248 cri.go:89] found id: ""
	I1218 00:38:28.271169 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.271176 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:28.271181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:28.271242 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:28.296330 1311248 cri.go:89] found id: ""
	I1218 00:38:28.296345 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.296352 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:28.296357 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:28.296413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:28.320425 1311248 cri.go:89] found id: ""
	I1218 00:38:28.320449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.320456 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:28.320461 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:28.320528 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:28.345590 1311248 cri.go:89] found id: ""
	I1218 00:38:28.345603 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.345610 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:28.345625 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:28.345688 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:28.374296 1311248 cri.go:89] found id: ""
	I1218 00:38:28.374310 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.374334 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:28.374340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:28.374407 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:28.397991 1311248 cri.go:89] found id: ""
	I1218 00:38:28.398006 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.398014 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:28.398023 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:28.398033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:28.453794 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:28.453812 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.468531 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:28.468547 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:28.536754 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:28.527630   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529120   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529993   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531712   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531990   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:28.527630   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529120   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529993   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531712   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531990   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:28.536784 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:28.536796 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:28.599155 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:28.599174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
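
Two of the gathering commands above are worth decoding. The container-status probe degrades gracefully: it prefers crictl by absolute path, falls back to the bare name, and finally to docker ps -a if crictl fails outright. The dmesg call asks for human-readable output with no pager or color and only warn-or-worse kernel messages, trimmed to the last 400 lines. Spelled out with long flags (flag names per util-linux dmesg):

    # Same probes as in the log, with the short flags expanded:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400
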
	I1218 00:38:31.143176 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:31.156254 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:31.156313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:31.185437 1311248 cri.go:89] found id: ""
	I1218 00:38:31.185452 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.185460 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:31.185472 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:31.185531 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:31.215130 1311248 cri.go:89] found id: ""
	I1218 00:38:31.215144 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.215153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:31.215157 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:31.215217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:31.240144 1311248 cri.go:89] found id: ""
	I1218 00:38:31.240157 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.240164 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:31.240169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:31.240227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:31.265058 1311248 cri.go:89] found id: ""
	I1218 00:38:31.265072 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.265079 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:31.265084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:31.265150 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:31.289354 1311248 cri.go:89] found id: ""
	I1218 00:38:31.289368 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.289375 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:31.289380 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:31.289438 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:31.319744 1311248 cri.go:89] found id: ""
	I1218 00:38:31.319758 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.319766 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:31.319771 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:31.319826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:31.343739 1311248 cri.go:89] found id: ""
	I1218 00:38:31.343753 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.343760 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:31.343768 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:31.343778 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:31.399267 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:31.399287 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:31.413578 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:31.413595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:31.478705 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:31.470217   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.470850   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472351   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472980   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.474460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:31.470217   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.470850   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472351   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472980   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.474460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:31.478714 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:31.478724 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:31.540680 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:31.540703 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.068816 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:34.079525 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:34.079589 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:34.106415 1311248 cri.go:89] found id: ""
	I1218 00:38:34.106432 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.106440 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:34.106445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:34.106506 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:34.131181 1311248 cri.go:89] found id: ""
	I1218 00:38:34.131195 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.131202 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:34.131208 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:34.131265 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:34.166885 1311248 cri.go:89] found id: ""
	I1218 00:38:34.166898 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.166906 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:34.166911 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:34.166970 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:34.197771 1311248 cri.go:89] found id: ""
	I1218 00:38:34.197786 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.197793 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:34.197798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:34.197856 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:34.226531 1311248 cri.go:89] found id: ""
	I1218 00:38:34.226546 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.226552 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:34.226557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:34.226614 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:34.252100 1311248 cri.go:89] found id: ""
	I1218 00:38:34.252114 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.252121 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:34.252127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:34.252185 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:34.278653 1311248 cri.go:89] found id: ""
	I1218 00:38:34.278667 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.278675 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:34.278683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:34.278694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:34.293444 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:34.293463 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:34.359201 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:34.359211 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:34.359221 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:34.420750 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:34.420773 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.449621 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:34.449637 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:37.006206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:37.019401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:37.019472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:37.047646 1311248 cri.go:89] found id: ""
	I1218 00:38:37.047660 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.047667 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:37.047673 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:37.047733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:37.076612 1311248 cri.go:89] found id: ""
	I1218 00:38:37.076646 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.076653 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:37.076658 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:37.076717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:37.102368 1311248 cri.go:89] found id: ""
	I1218 00:38:37.102383 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.102390 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:37.102395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:37.102452 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:37.126829 1311248 cri.go:89] found id: ""
	I1218 00:38:37.126843 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.126850 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:37.126855 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:37.126913 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:37.159965 1311248 cri.go:89] found id: ""
	I1218 00:38:37.159980 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.159987 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:37.159992 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:37.160048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:37.193535 1311248 cri.go:89] found id: ""
	I1218 00:38:37.193549 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.193558 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:37.193564 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:37.193622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:37.224708 1311248 cri.go:89] found id: ""
	I1218 00:38:37.224723 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.224730 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:37.224738 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:37.224749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:37.287765 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:37.287775 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:37.287787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:37.349218 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:37.349239 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:37.377886 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:37.377902 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:37.435205 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:37.435224 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:39.950327 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:39.960885 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:39.960948 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:39.985573 1311248 cri.go:89] found id: ""
	I1218 00:38:39.985587 1311248 logs.go:282] 0 containers: []
	W1218 00:38:39.985596 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:39.985602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:39.985662 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:40.020843 1311248 cri.go:89] found id: ""
	I1218 00:38:40.020859 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.020867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:40.020873 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:40.020949 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:40.067991 1311248 cri.go:89] found id: ""
	I1218 00:38:40.068007 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.068015 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:40.068021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:40.068096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:40.097024 1311248 cri.go:89] found id: ""
	I1218 00:38:40.097039 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.097047 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:40.097053 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:40.097118 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:40.127502 1311248 cri.go:89] found id: ""
	I1218 00:38:40.127518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.127526 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:40.127531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:40.127595 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:40.165566 1311248 cri.go:89] found id: ""
	I1218 00:38:40.165580 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.165587 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:40.165593 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:40.165660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:40.204927 1311248 cri.go:89] found id: ""
	I1218 00:38:40.204940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.204948 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:40.204956 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:40.204967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:40.222297 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:40.222314 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:40.292382 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:40.292392 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:40.292403 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:40.353852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:40.353871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:40.385828 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:40.385844 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:42.942427 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:42.952937 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:42.952996 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:42.982184 1311248 cri.go:89] found id: ""
	I1218 00:38:42.982201 1311248 logs.go:282] 0 containers: []
	W1218 00:38:42.982208 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:42.982213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:42.982271 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:43.009928 1311248 cri.go:89] found id: ""
	I1218 00:38:43.009944 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.009952 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:43.009957 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:43.010021 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:43.036384 1311248 cri.go:89] found id: ""
	I1218 00:38:43.036397 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.036405 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:43.036410 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:43.036472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:43.061945 1311248 cri.go:89] found id: ""
	I1218 00:38:43.061959 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.061967 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:43.061972 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:43.062030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:43.087977 1311248 cri.go:89] found id: ""
	I1218 00:38:43.087992 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.087999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:43.088005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:43.088069 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:43.113297 1311248 cri.go:89] found id: ""
	I1218 00:38:43.113312 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.113319 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:43.113324 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:43.113390 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:43.148378 1311248 cri.go:89] found id: ""
	I1218 00:38:43.148392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.148399 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:43.148408 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:43.148419 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:43.218202 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:43.218227 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:43.234424 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:43.234441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:43.295849 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:43.287382   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.287819   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289537   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289959   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.291588   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
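The repeated "dial tcp [::1]:8441: connect: connection refused" lines above mean kubectl inside the node is pointed at localhost:8441 (the --apiserver-port used by this test) and nothing is accepting connections there yet. As a minimal sketch of that same reachability inference (not minikube code; the host and port are taken from this run):

// probe_apiserver.go — illustrative only: is anything listening on the
// apiserver port? A refused dial matches the failure mode in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Same symptom as above: connect: connection refused.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}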
	I1218 00:38:43.295860 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:43.295871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:43.357903 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:43.357924 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
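Each cycle above enumerates the expected control-plane containers one by one with `sudo crictl ps -a --quiet --name=<component>` (cri.go), logging a warning per component when no ID comes back, and the final "container status" gather falls back to `docker ps -a` if crictl is unavailable. A hedged Go sketch of that probe sequence, mirroring the commands in the log rather than the actual ssh_runner implementation:

// list_containers.go — a sketch of the per-component crictl probe shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			// Mirrors the log: No container was found matching "<name>".
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}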
	I1218 00:38:45.889646 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:45.899918 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:45.899981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:45.923610 1311248 cri.go:89] found id: ""
	I1218 00:38:45.923623 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.923630 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:45.923635 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:45.923696 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:45.949282 1311248 cri.go:89] found id: ""
	I1218 00:38:45.949296 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.949304 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:45.949309 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:45.949371 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:45.974071 1311248 cri.go:89] found id: ""
	I1218 00:38:45.974085 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.974092 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:45.974097 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:45.974153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:45.997865 1311248 cri.go:89] found id: ""
	I1218 00:38:45.997880 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.997887 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:45.997892 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:45.997953 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:46.026399 1311248 cri.go:89] found id: ""
	I1218 00:38:46.026413 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.026426 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:46.026432 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:46.026490 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:46.060011 1311248 cri.go:89] found id: ""
	I1218 00:38:46.060026 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.060033 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:46.060038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:46.060097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:46.095378 1311248 cri.go:89] found id: ""
	I1218 00:38:46.095392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.095398 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:46.095407 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:46.095418 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:46.110828 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:46.110845 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:46.194637 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:46.194647 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:46.194657 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:46.265968 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:46.265989 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:46.298428 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:46.298444 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:48.855794 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:48.868391 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:48.868457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:48.898010 1311248 cri.go:89] found id: ""
	I1218 00:38:48.898024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.898032 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:48.898037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:48.898097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:48.926962 1311248 cri.go:89] found id: ""
	I1218 00:38:48.926976 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.926984 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:48.926989 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:48.927046 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:48.953073 1311248 cri.go:89] found id: ""
	I1218 00:38:48.953096 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.953104 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:48.953109 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:48.953171 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:48.978527 1311248 cri.go:89] found id: ""
	I1218 00:38:48.978542 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.978548 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:48.978554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:48.978611 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:49.005774 1311248 cri.go:89] found id: ""
	I1218 00:38:49.005791 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.005800 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:49.005805 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:49.005881 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:49.032714 1311248 cri.go:89] found id: ""
	I1218 00:38:49.032743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.032751 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:49.032756 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:49.032845 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:49.058437 1311248 cri.go:89] found id: ""
	I1218 00:38:49.058451 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.058459 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:49.058468 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:49.058478 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:49.114793 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:49.114813 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:49.129898 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:49.129916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:49.218168 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:49.218179 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:49.218190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:49.289574 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:49.289595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:51.822637 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:51.833100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:51.833161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:51.858494 1311248 cri.go:89] found id: ""
	I1218 00:38:51.858508 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.858515 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:51.858520 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:51.858609 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:51.883202 1311248 cri.go:89] found id: ""
	I1218 00:38:51.883217 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.883224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:51.883229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:51.883286 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:51.911732 1311248 cri.go:89] found id: ""
	I1218 00:38:51.911746 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.911753 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:51.911758 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:51.911813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:51.937059 1311248 cri.go:89] found id: ""
	I1218 00:38:51.937073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.937080 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:51.937086 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:51.937144 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:51.960983 1311248 cri.go:89] found id: ""
	I1218 00:38:51.960998 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.961016 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:51.961021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:51.961095 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:51.985889 1311248 cri.go:89] found id: ""
	I1218 00:38:51.985904 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.985911 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:51.985916 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:51.985976 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:52.012132 1311248 cri.go:89] found id: ""
	I1218 00:38:52.012147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:52.012155 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:52.012163 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:52.012174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:52.080718 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:52.080736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:52.080748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:52.144427 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:52.144446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:52.176847 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:52.176869 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:52.239307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:52.239325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:54.754340 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:54.764793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:54.764857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:54.794012 1311248 cri.go:89] found id: ""
	I1218 00:38:54.794027 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.794034 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:54.794039 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:54.794096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:54.823133 1311248 cri.go:89] found id: ""
	I1218 00:38:54.823147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.823155 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:54.823160 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:54.823216 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:54.847977 1311248 cri.go:89] found id: ""
	I1218 00:38:54.847991 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.847998 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:54.848003 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:54.848064 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:54.873449 1311248 cri.go:89] found id: ""
	I1218 00:38:54.873462 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.873469 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:54.873475 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:54.873532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:54.897891 1311248 cri.go:89] found id: ""
	I1218 00:38:54.897905 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.897922 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:54.897928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:54.897985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:54.922432 1311248 cri.go:89] found id: ""
	I1218 00:38:54.922449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.922456 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:54.922462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:54.922520 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:54.947869 1311248 cri.go:89] found id: ""
	I1218 00:38:54.947884 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.947908 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:54.947916 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:54.947927 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:55.005409 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:55.005434 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:55.026491 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:55.026508 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:55.094641 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:55.094652 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:55.094663 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:55.159462 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:55.159481 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.695023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:57.706079 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:57.706147 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:57.735083 1311248 cri.go:89] found id: ""
	I1218 00:38:57.735106 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.735114 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:57.735119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:57.735178 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:57.762228 1311248 cri.go:89] found id: ""
	I1218 00:38:57.762242 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.762249 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:57.762255 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:57.762313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:57.787211 1311248 cri.go:89] found id: ""
	I1218 00:38:57.787226 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.787233 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:57.787238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:57.787303 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:57.812671 1311248 cri.go:89] found id: ""
	I1218 00:38:57.812686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.812693 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:57.812699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:57.812762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:57.840939 1311248 cri.go:89] found id: ""
	I1218 00:38:57.840953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.840961 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:57.840966 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:57.841031 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:57.867148 1311248 cri.go:89] found id: ""
	I1218 00:38:57.867163 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.867170 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:57.867175 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:57.867232 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:57.891633 1311248 cri.go:89] found id: ""
	I1218 00:38:57.891648 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.891665 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:57.891674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:57.891684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.918896 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:57.918913 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:57.975605 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:57.975625 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:57.990660 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:57.990676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:58.063038 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:58.053532   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.054524   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.056457   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.057354   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.058032   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:58.063048 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:58.063061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.627359 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:00.638675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:00.638768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:00.669731 1311248 cri.go:89] found id: ""
	I1218 00:39:00.669745 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.669752 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:00.669757 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:00.669824 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:00.697124 1311248 cri.go:89] found id: ""
	I1218 00:39:00.697138 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.697145 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:00.697151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:00.697211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:00.722455 1311248 cri.go:89] found id: ""
	I1218 00:39:00.722469 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.722476 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:00.722486 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:00.722545 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:00.750996 1311248 cri.go:89] found id: ""
	I1218 00:39:00.751010 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.751018 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:00.751023 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:00.751091 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:00.780012 1311248 cri.go:89] found id: ""
	I1218 00:39:00.780026 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.780033 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:00.780038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:00.780105 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:00.807119 1311248 cri.go:89] found id: ""
	I1218 00:39:00.807133 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.807140 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:00.807145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:00.807213 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:00.836658 1311248 cri.go:89] found id: ""
	I1218 00:39:00.836673 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.836681 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:00.836689 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:00.836699 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:00.851616 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:00.851633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:00.919909 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:00.908348   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.909901   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.912294   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.913473   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.915083   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:00.919918 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:00.919929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.985802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:00.985823 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:01.017691 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:01.017707 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.574413 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:03.585024 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:03.585088 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:03.615721 1311248 cri.go:89] found id: ""
	I1218 00:39:03.615735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.615742 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:03.615748 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:03.615811 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:03.641216 1311248 cri.go:89] found id: ""
	I1218 00:39:03.641230 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.641237 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:03.641243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:03.641307 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:03.665604 1311248 cri.go:89] found id: ""
	I1218 00:39:03.665618 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.665625 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:03.665639 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:03.665717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:03.690936 1311248 cri.go:89] found id: ""
	I1218 00:39:03.690951 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.690958 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:03.690970 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:03.691030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:03.716763 1311248 cri.go:89] found id: ""
	I1218 00:39:03.716794 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.716806 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:03.716811 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:03.716898 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:03.742156 1311248 cri.go:89] found id: ""
	I1218 00:39:03.742170 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.742177 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:03.742183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:03.742240 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:03.771205 1311248 cri.go:89] found id: ""
	I1218 00:39:03.771220 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.771227 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:03.771235 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:03.771245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:03.834106 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:03.834127 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:03.863112 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:03.863129 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.919444 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:03.919465 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:03.934588 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:03.934607 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:04.000293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:03.991688   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.992412   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994242   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994901   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.996407   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
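The timestamps show the whole sequence repeating roughly every three seconds: `sudo pgrep -xnf kube-apiserver.*minikube.*` is run, no process is found, and the container/log gathering starts over. A sketch of that wait loop follows; the interval and timeout here are illustrative assumptions, not minikube's actual values:

// wait_apiserver.go — a sketch of the poll-until-timeout pattern in this log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between attempts
	}
	fmt.Println("timed out waiting for kube-apiserver")
}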
	I1218 00:39:06.500788 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:06.511530 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:06.511596 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:06.536538 1311248 cri.go:89] found id: ""
	I1218 00:39:06.536554 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.536562 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:06.536568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:06.536651 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:06.565199 1311248 cri.go:89] found id: ""
	I1218 00:39:06.565213 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.565219 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:06.565224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:06.565283 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:06.589614 1311248 cri.go:89] found id: ""
	I1218 00:39:06.589628 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.589636 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:06.589641 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:06.589700 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:06.614004 1311248 cri.go:89] found id: ""
	I1218 00:39:06.614019 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.614027 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:06.614032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:06.614093 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:06.638819 1311248 cri.go:89] found id: ""
	I1218 00:39:06.638833 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.638841 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:06.638846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:06.638908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:06.666620 1311248 cri.go:89] found id: ""
	I1218 00:39:06.666634 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.666643 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:06.666648 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:06.666707 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:06.694192 1311248 cri.go:89] found id: ""
	I1218 00:39:06.694207 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.694216 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:06.694224 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:06.694235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:06.709318 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:06.709336 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:06.773553 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:06.764393   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.765251   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767036   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767646   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.769278   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:06.773564 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:06.773587 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:06.842917 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:06.842937 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:06.877280 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:06.877296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
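Each polling round above has the same shape: probe for a kube-apiserver process with pgrep, ask crictl for each control-plane component's containers, then fall back to gathering kubelet, containerd, and dmesg logs when nothing is found. A minimal sketch of running the same checks by hand on the node (the pgrep and crictl invocations are taken verbatim from the log; the loop and the ss probe are additions, and 8441 is the apiserver port this profile was started with):

    # Probe for a running apiserver process (same pattern the log uses)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # List containers (running or exited) for each expected component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done

    # Empty output everywhere means the pods were never created; confirm
    # nothing is listening on the apiserver port either
    sudo ss -ltnp | grep 8441 || echo "no listener on port 8441"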
	I1218 00:39:09.433923 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:09.445181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:09.445248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:09.470100 1311248 cri.go:89] found id: ""
	I1218 00:39:09.470115 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.470122 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:09.470127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:09.470184 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:09.499949 1311248 cri.go:89] found id: ""
	I1218 00:39:09.499964 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.499973 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:09.499978 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:09.500044 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:09.526313 1311248 cri.go:89] found id: ""
	I1218 00:39:09.526328 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.526335 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:09.526340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:09.526404 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:09.551831 1311248 cri.go:89] found id: ""
	I1218 00:39:09.551844 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.551851 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:09.551857 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:09.551923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:09.577535 1311248 cri.go:89] found id: ""
	I1218 00:39:09.577549 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.577557 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:09.577561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:09.577622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:09.602570 1311248 cri.go:89] found id: ""
	I1218 00:39:09.602584 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.602591 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:09.602597 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:09.602658 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:09.630715 1311248 cri.go:89] found id: ""
	I1218 00:39:09.630729 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.630736 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:09.630745 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:09.630755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:09.686840 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:09.686859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:09.703315 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:09.703331 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:09.770650 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:09.762484   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.762995   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.764603   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.765164   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.766687   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:09.770660 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:09.770670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:09.832439 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:09.832457 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
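Every describe-nodes attempt fails the same way: "dial tcp [::1]:8441: connect: connection refused" means kubectl resolved localhost to the IPv6 loopback and found no listener at all, an apiserver-not-running symptom rather than a TLS or auth problem. One way to confirm that from inside the node (a sketch; the curl flags are standard, and the kubeconfig path is the one the log passes to kubectl):

    # Probe both loopback address families on the configured apiserver port
    curl -ksS --max-time 5 https://127.0.0.1:8441/healthz; echo
    curl -ksS --max-time 5 'https://[::1]:8441/healthz'; echo

    # Check which endpoint the kubeconfig actually points at
    sudo grep 'server:' /var/lib/minikube/kubeconfig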
	I1218 00:39:12.361961 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:12.372127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:12.372190 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:12.408061 1311248 cri.go:89] found id: ""
	I1218 00:39:12.408075 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.408082 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:12.408088 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:12.408145 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:12.434860 1311248 cri.go:89] found id: ""
	I1218 00:39:12.434874 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.434881 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:12.434886 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:12.434946 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:12.465255 1311248 cri.go:89] found id: ""
	I1218 00:39:12.465270 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.465278 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:12.465283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:12.465341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:12.494330 1311248 cri.go:89] found id: ""
	I1218 00:39:12.494344 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.494350 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:12.494356 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:12.494420 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:12.518885 1311248 cri.go:89] found id: ""
	I1218 00:39:12.518900 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.518907 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:12.518912 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:12.518973 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:12.543549 1311248 cri.go:89] found id: ""
	I1218 00:39:12.543564 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.543573 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:12.543578 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:12.543641 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:12.568469 1311248 cri.go:89] found id: ""
	I1218 00:39:12.568483 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.568500 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:12.568507 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:12.568519 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:12.624017 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:12.624039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:12.639011 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:12.639028 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:12.703723 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:12.695186   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.695878   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697409   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697942   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.699375   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:12.703734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:12.703744 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:12.765331 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:12.765350 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
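The cri.go lines show the containers being looked up under containerd's k8s.io namespace (root /run/containerd/runc/k8s.io). When crictl keeps coming back empty, asking containerd directly can rule out a CRI-plumbing problem; a sketch assuming the default containerd socket path:

    # Inspect the same namespace with ctr, bypassing the CRI layer
    sudo ctr --address /run/containerd/containerd.sock -n k8s.io containers list
    sudo ctr --address /run/containerd/containerd.sock -n k8s.io tasks list

    # Confirm which runtime endpoint crictl itself is talking to
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head -n 20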
	I1218 00:39:15.294913 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:15.308145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:15.308210 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:15.340203 1311248 cri.go:89] found id: ""
	I1218 00:39:15.340218 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.340225 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:15.340230 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:15.340289 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:15.367732 1311248 cri.go:89] found id: ""
	I1218 00:39:15.367747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.367754 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:15.367760 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:15.367818 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:15.398027 1311248 cri.go:89] found id: ""
	I1218 00:39:15.398042 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.398049 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:15.398055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:15.398115 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:15.430352 1311248 cri.go:89] found id: ""
	I1218 00:39:15.430366 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.430373 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:15.430379 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:15.430442 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:15.461268 1311248 cri.go:89] found id: ""
	I1218 00:39:15.461283 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.461291 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:15.461297 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:15.461361 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:15.487656 1311248 cri.go:89] found id: ""
	I1218 00:39:15.487671 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.487678 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:15.487684 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:15.487744 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:15.516835 1311248 cri.go:89] found id: ""
	I1218 00:39:15.516850 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.516858 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:15.516867 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:15.516877 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:15.584348 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:15.575875   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.576694   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578221   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578844   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.580399   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:15.584357 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:15.584377 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:15.646829 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:15.646849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.675913 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:15.675929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:15.731421 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:15.731441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
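All four control-plane components (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) run as static pods created by the kubelet, so "No container was found matching" across the board usually means the kubelet never got that far. A quick sketch for checking that on the node, using the standard kubeadm manifest path:

    # Static pod manifests the kubelet is expected to pick up
    ls -l /etc/kubernetes/manifests/

    # Is the kubelet active, and what is it complaining about?
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet -n 50 --no-pager | grep -iE 'error|fail' || true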
	I1218 00:39:18.246605 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:18.257277 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:18.257340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:18.282497 1311248 cri.go:89] found id: ""
	I1218 00:39:18.282512 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.282519 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:18.282527 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:18.282594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:18.317178 1311248 cri.go:89] found id: ""
	I1218 00:39:18.317193 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.317200 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:18.317205 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:18.317267 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:18.342018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.342032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.342039 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:18.342044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:18.342098 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:18.366018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.366032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.366040 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:18.366045 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:18.366107 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:18.390880 1311248 cri.go:89] found id: ""
	I1218 00:39:18.390894 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.390902 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:18.390908 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:18.390968 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:18.427152 1311248 cri.go:89] found id: ""
	I1218 00:39:18.427167 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.427174 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:18.427181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:18.427241 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:18.458481 1311248 cri.go:89] found id: ""
	I1218 00:39:18.458495 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.458502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:18.458510 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:18.458521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:18.486379 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:18.486397 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:18.546371 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:18.546396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:18.561410 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:18.561431 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:18.625094 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:18.616770   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.617298   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.618947   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.619597   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.621337   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:18.625105 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:18.625118 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
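The timestamps show the whole probe repeating roughly every three seconds until the test's overall timeout. A hand-rolled equivalent of that wait loop can be handy when reproducing the failure interactively (a sketch; the 300-second deadline is arbitrary):

    # Poll until a kube-apiserver container exists or the deadline passes
    deadline=$(( $(date +%s) + 300 ))
    until [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver container present"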
	I1218 00:39:21.187071 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:21.197777 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:21.197842 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:21.228457 1311248 cri.go:89] found id: ""
	I1218 00:39:21.228472 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.228479 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:21.228485 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:21.228551 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:21.254227 1311248 cri.go:89] found id: ""
	I1218 00:39:21.254240 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.254258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:21.254264 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:21.254321 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:21.283166 1311248 cri.go:89] found id: ""
	I1218 00:39:21.283180 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.283187 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:21.283193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:21.283259 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:21.307940 1311248 cri.go:89] found id: ""
	I1218 00:39:21.307954 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.307962 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:21.307967 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:21.308022 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:21.333576 1311248 cri.go:89] found id: ""
	I1218 00:39:21.333590 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.333597 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:21.333602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:21.333660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:21.357404 1311248 cri.go:89] found id: ""
	I1218 00:39:21.357418 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.357425 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:21.357430 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:21.357488 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:21.386789 1311248 cri.go:89] found id: ""
	I1218 00:39:21.386803 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.386811 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:21.386819 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:21.386830 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:21.467813 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:21.459694   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.460331   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.461853   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.462343   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.463834   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:21.467824 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:21.467834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.529999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:21.530019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:21.561213 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:21.561228 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:21.619110 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:21.619128 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
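The dmesg step keeps only warning-and-worse kernel messages, with color disabled (-L=never) and the pager suppressed (-P), trimmed to the last 400 lines. Running the same filter with a quick scan for the usual container-killers is a reasonable follow-up when containers never appear (a sketch):

    # Same severity filter the log uses, without human-readable formatting
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400

    # Common kernel-side causes of vanished or never-started containers
    sudo dmesg | grep -iE 'oom|out of memory|cgroup' | tail -n 20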
	I1218 00:39:24.133884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:24.144224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:24.144298 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:24.169895 1311248 cri.go:89] found id: ""
	I1218 00:39:24.169909 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.169916 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:24.169922 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:24.169981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:24.196376 1311248 cri.go:89] found id: ""
	I1218 00:39:24.196390 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.196396 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:24.196401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:24.196464 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:24.220959 1311248 cri.go:89] found id: ""
	I1218 00:39:24.220978 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.220986 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:24.220991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:24.221051 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:24.246721 1311248 cri.go:89] found id: ""
	I1218 00:39:24.246735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.246745 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:24.246751 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:24.246819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:24.271380 1311248 cri.go:89] found id: ""
	I1218 00:39:24.271394 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.271401 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:24.271406 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:24.271466 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:24.298631 1311248 cri.go:89] found id: ""
	I1218 00:39:24.298645 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.298652 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:24.298657 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:24.298713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:24.322933 1311248 cri.go:89] found id: ""
	I1218 00:39:24.322947 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.322965 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:24.322974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:24.322984 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:24.378307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:24.378325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:24.395279 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:24.395296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:24.478731 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:24.469907   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.470803   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.472526   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.473242   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.474699   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:24.478740 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:24.478750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:24.539558 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:24.539578 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
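The describe-nodes step runs the kubectl binary minikube staged under /var/lib/minikube/binaries for the cluster's Kubernetes version. From the host, the equivalent checks go through minikube's own wrappers (a sketch; add -p <profile> if the profile is not the active one):

    # Same query via minikube's bundled kubectl
    minikube kubectl -- get nodes
    minikube kubectl -- describe nodes

    # Overall cluster state and a full diagnostic bundle
    minikube status
    minikube logs --file=minikube.log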
	I1218 00:39:27.069527 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:27.079511 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:27.079570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:27.104730 1311248 cri.go:89] found id: ""
	I1218 00:39:27.104747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.104754 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:27.104759 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:27.104826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:27.134528 1311248 cri.go:89] found id: ""
	I1218 00:39:27.134543 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.134551 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:27.134556 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:27.134618 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:27.160290 1311248 cri.go:89] found id: ""
	I1218 00:39:27.160304 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.160311 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:27.160316 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:27.160374 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:27.187607 1311248 cri.go:89] found id: ""
	I1218 00:39:27.187621 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.187628 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:27.187634 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:27.187691 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:27.214602 1311248 cri.go:89] found id: ""
	I1218 00:39:27.214616 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.214623 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:27.214630 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:27.214690 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:27.239452 1311248 cri.go:89] found id: ""
	I1218 00:39:27.239466 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.239474 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:27.239479 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:27.239538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:27.268209 1311248 cri.go:89] found id: ""
	I1218 00:39:27.268232 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.268240 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:27.268248 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:27.268259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:27.283007 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:27.283033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:27.351624 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:27.341545   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.342008   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.344497   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.345754   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.346472   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:27.351634 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:27.351644 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:27.414794 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:27.414814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.449027 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:27.449042 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:30.008353 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:30.051512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:30.051599 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:30.142207 1311248 cri.go:89] found id: ""
	I1218 00:39:30.142226 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.142234 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:30.142241 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:30.142317 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:30.175952 1311248 cri.go:89] found id: ""
	I1218 00:39:30.175967 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.175979 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:30.175985 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:30.176054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:30.202613 1311248 cri.go:89] found id: ""
	I1218 00:39:30.202640 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.202649 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:30.202655 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:30.202718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:30.229638 1311248 cri.go:89] found id: ""
	I1218 00:39:30.229653 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.229661 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:30.229666 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:30.229728 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:30.261192 1311248 cri.go:89] found id: ""
	I1218 00:39:30.261206 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.261214 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:30.261220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:30.261285 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:30.288158 1311248 cri.go:89] found id: ""
	I1218 00:39:30.288173 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.288180 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:30.288189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:30.288251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:30.314418 1311248 cri.go:89] found id: ""
	I1218 00:39:30.314432 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.314441 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:30.314450 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:30.314462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:30.369830 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:30.369849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:30.385018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:30.385037 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:30.467908 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:30.467920 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:30.467930 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:30.529075 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:30.529095 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
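The probe cycle above repeats every few seconds while minikube waits for the restarted control plane: each expected component is looked up by name via crictl, and an empty ID list means the container was never created in any state (not merely exited). A minimal standalone sketch of the same probe, assuming shell access to the node (e.g. via `minikube ssh -p functional-232602`, the profile from this run):

    # Empty output from `crictl ps -a --quiet --name=<name>` means no
    # container for that component exists, running or stopped.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done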
	I1218 00:39:33.059241 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:33.070119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:33.070182 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:33.095716 1311248 cri.go:89] found id: ""
	I1218 00:39:33.095730 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.095738 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:33.095744 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:33.095804 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:33.121681 1311248 cri.go:89] found id: ""
	I1218 00:39:33.121697 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.121711 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:33.121717 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:33.121783 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:33.147424 1311248 cri.go:89] found id: ""
	I1218 00:39:33.147438 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.147445 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:33.147451 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:33.147514 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:33.173916 1311248 cri.go:89] found id: ""
	I1218 00:39:33.173931 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.173938 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:33.173943 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:33.174004 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:33.199675 1311248 cri.go:89] found id: ""
	I1218 00:39:33.199690 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.199697 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:33.199702 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:33.199761 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:33.229684 1311248 cri.go:89] found id: ""
	I1218 00:39:33.229698 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.229706 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:33.229711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:33.229771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:33.255931 1311248 cri.go:89] found id: ""
	I1218 00:39:33.255955 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.255963 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:33.255971 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:33.255981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:33.312520 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:33.312538 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:33.327008 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:33.327024 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:33.392853 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:33.392863 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:33.392873 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:33.462852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:33.462872 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:35.991111 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:36.001578 1311248 kubeadm.go:602] duration metric: took 4m4.636770246s to restartPrimaryControlPlane
	W1218 00:39:36.001631 1311248 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1218 00:39:36.001712 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:39:36.428039 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:39:36.441875 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:39:36.449799 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:39:36.449855 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:39:36.457535 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:39:36.457543 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:39:36.457593 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:39:36.465339 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:39:36.465393 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:39:36.472406 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:39:36.480110 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:39:36.480163 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:39:36.487432 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.494964 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:39:36.495019 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.502375 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:39:36.509914 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:39:36.509976 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
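The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8441 and removed otherwise, so that `kubeadm init` regenerates it. Here every grep exits with status 2 because the files no longer exist after the reset, making the subsequent `rm -f` calls no-ops. A condensed sketch of the same pattern:

    # Keep a kubeconfig only if it targets the expected endpoint;
    # rm -f tolerates files that are already gone.
    ep="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done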
	I1218 00:39:36.517325 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:39:36.642706 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:39:36.643096 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:39:36.709498 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:43:38.241451 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 00:43:38.241477 1311248 kubeadm.go:319] 
	I1218 00:43:38.241546 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 00:43:38.245587 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.245639 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.245728 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.245779 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.245813 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.245856 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.245904 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.245947 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.246021 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.246074 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.246124 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.246169 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.246253 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.246316 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.246394 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.246489 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.246578 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.246661 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.249668 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.249761 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.249825 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.249900 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.249985 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.250056 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.250107 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.250167 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.250231 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.250306 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.250386 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.250429 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.250494 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:38.250547 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:38.250611 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:38.250669 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:38.250731 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:38.250784 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:38.250896 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:38.250969 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:38.255653 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:38.255752 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:38.255840 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:38.255905 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:38.256008 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:38.256128 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:38.256248 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:38.256329 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:38.256365 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:38.256499 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:38.256681 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:43:38.256752 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000096267s
	I1218 00:43:38.256755 1311248 kubeadm.go:319] 
	I1218 00:43:38.256814 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:43:38.256853 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:43:38.256963 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:43:38.256967 1311248 kubeadm.go:319] 
	I1218 00:43:38.257093 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:43:38.257126 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:43:38.257155 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:43:38.257212 1311248 kubeadm.go:319] 
	W1218 00:43:38.257278 1311248 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000096267s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
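
Everything in this first failed attempt points at the kubelet itself rather than the control-plane containers: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and the connection is refused throughout, i.e. the kubelet never started listening. The error text's own three suggestions can be run directly on the node; a minimal triage sketch (the pager and tail flags are additions for non-interactive use):

    # The exact health probe kubeadm polls for up to 4m0s:
    curl -sSL http://127.0.0.1:10248/healthz ; echo
    # Connection refused => check whether the kubelet process is running
    # at all, then read its unit log for the real startup error:
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100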
	
	I1218 00:43:38.257393 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:43:38.672580 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:43:38.686195 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:43:38.686247 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:43:38.694107 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:43:38.694119 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:43:38.694170 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:43:38.702289 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:43:38.702343 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:43:38.710380 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:43:38.718160 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:43:38.718218 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:43:38.726244 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.734209 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:43:38.734268 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.741907 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:43:38.749716 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:43:38.749773 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 00:43:38.757471 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:43:38.797919 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.797966 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.877731 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.877795 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.877835 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.877879 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.877926 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.877972 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.878019 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.878065 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.878112 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.878155 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.878202 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.878247 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.941330 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.941446 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.941535 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.951935 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.957317 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.957410 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.957474 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.957580 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.957646 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.957723 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.957784 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.957852 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.957913 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.957987 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.958059 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.958095 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.958151 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:39.202920 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:39.377892 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:39.964483 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:40.103558 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:40.457630 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:40.458383 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:40.462089 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:40.465489 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:40.465583 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:40.465654 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:40.465716 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:40.486385 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:40.486497 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:40.494535 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:40.494848 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:40.495030 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:40.625355 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:40.625497 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:47:40.625149 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000298437s
	I1218 00:47:40.625174 1311248 kubeadm.go:319] 
	I1218 00:47:40.625227 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:47:40.625262 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:47:40.625362 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:47:40.625367 1311248 kubeadm.go:319] 
	I1218 00:47:40.625481 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:47:40.625513 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:47:40.625550 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:47:40.625553 1311248 kubeadm.go:319] 
	I1218 00:47:40.629455 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:47:40.629954 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:47:40.630083 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:47:40.630316 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 00:47:40.630321 1311248 kubeadm.go:319] 
	I1218 00:47:40.630384 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
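The retry fails identically after another 4m0.000298437s wait, this time with the probe timing out rather than being refused. The only actionable hint in the preflight output is the cgroups v1 deprecation warning, and this host (kernel 5.15.0-1084-aws under the docker driver) is on cgroup v1; per the warning's own wording, kubelet v1.35+ requires the configuration option FailCgroupV1 set to false to keep running on cgroup v1. Whether that is the actual root cause of this run is not established by the log. A hypothetical sketch (the camelCase YAML key is assumed from KubeletConfiguration naming conventions; /var/lib/kubelet/config.yaml is the file kubeadm writes above):

    # cgroup2fs => cgroup v2, tmpfs => cgroup v1:
    stat -fc %T /sys/fs/cgroup
    # Hypothetical opt-in to cgroup v1 for kubelet v1.35+, then restart:
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet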
	I1218 00:47:40.630455 1311248 kubeadm.go:403] duration metric: took 12m9.299018648s to StartCluster
	I1218 00:47:40.630487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:47:40.630549 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:47:40.655474 1311248 cri.go:89] found id: ""
	I1218 00:47:40.655489 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.655497 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:47:40.655502 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:47:40.655558 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:47:40.681677 1311248 cri.go:89] found id: ""
	I1218 00:47:40.681692 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.681699 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:47:40.681705 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:47:40.681772 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:47:40.714293 1311248 cri.go:89] found id: ""
	I1218 00:47:40.714307 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.714314 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:47:40.714319 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:47:40.714379 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:47:40.739065 1311248 cri.go:89] found id: ""
	I1218 00:47:40.739089 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.739097 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:47:40.739102 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:47:40.739168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:47:40.763653 1311248 cri.go:89] found id: ""
	I1218 00:47:40.763666 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.763673 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:47:40.763678 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:47:40.763737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:47:40.789038 1311248 cri.go:89] found id: ""
	I1218 00:47:40.789052 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.789059 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:47:40.789065 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:47:40.789124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:47:40.817866 1311248 cri.go:89] found id: ""
	I1218 00:47:40.817880 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.817887 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:47:40.817895 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:47:40.817905 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:47:40.877071 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:47:40.877090 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:47:40.891818 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:47:40.891835 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:47:40.956585 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:47:40.956595 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:47:40.956605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:47:41.023372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:47:41.023390 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
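The container-status gather above uses a compact shell fallback: `which crictl || echo crictl` substitutes the resolved path when crictl is installed and the bare command name otherwise, and the trailing `|| sudo docker ps -a` runs only if the whole crictl invocation fails. Equivalent standalone form:

    # Prefer crictl (resolved path if available, bare name otherwise);
    # fall back to docker only when the crictl call itself fails:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a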
	W1218 00:47:41.051126 1311248 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 00:47:41.051157 1311248 out.go:285] * 
	W1218 00:47:41.051213 1311248 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.051229 1311248 out.go:285] * 
	W1218 00:47:41.053388 1311248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:47:41.058223 1311248 out.go:203] 
	W1218 00:47:41.061890 1311248 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.061936 1311248 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 00:47:41.061956 1311248 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 00:47:41.065091 1311248 out.go:203] 
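
For reference, the workaround suggested above would look like the following invocation; this is a sketch only, reusing the profile and flags from this run, and it may not clear the separate cgroup v1 validation failure shown in the kubelet log further below:

    out/minikube-linux-arm64 start -p functional-232602 \
      --extra-config=kubelet.cgroup-driver=systemd \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1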
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724217200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724234003Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724272616Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724290872Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724301153Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724312311Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724321337Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724338510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724355125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724387017Z" level=info msg="Connect containerd service"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724787739Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.725358196Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744687707Z" level=info msg="Start subscribing containerd event"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744774532Z" level=info msg="Start recovering state"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744732367Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.745188078Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.785773770Z" level=info msg="Start event monitor"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.785958718Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786026286Z" level=info msg="Start streaming server"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786098128Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786157901Z" level=info msg="runtime interface starting up..."
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786221604Z" level=info msg="starting plugins..."
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786283461Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 00:35:29 functional-232602 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.788365819Z" level=info msg="containerd successfully booted in 0.084734s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:47:42.342843   20998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:42.343666   20998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:42.344919   20998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:42.345531   20998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:42.347210   20998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
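
A minimal manual probe of the same endpoint, assuming shell access to the host running kubectl, would be:

    curl -ksS https://localhost:8441/healthz

The connection-refused result would match the errors above: nothing listens on 8441 because the kubelet never brought up the kube-apiserver static pod.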
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:47:42 up  7:30,  0 user,  load average: 0.62, 0.33, 0.47
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:47:38 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:39 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 18 00:47:39 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:39 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:39 functional-232602 kubelet[20804]: E1218 00:47:39.688509   20804 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:39 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:39 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:40 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 18 00:47:40 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:40 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:40 functional-232602 kubelet[20810]: E1218 00:47:40.437448   20810 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:40 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:40 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 18 00:47:41 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:41 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:41 functional-232602 kubelet[20896]: E1218 00:47:41.205033   20896 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 18 00:47:41 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:41 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:41 functional-232602 kubelet[20919]: E1218 00:47:41.946199   20919 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
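
The repeated "command failed" error above is the kubelet-side counterpart of the FailCgroupV1 warning from kubeadm's SystemVerification. A minimal sketch of the opt-out, assuming the field serializes in lowerCamelCase as is conventional for KubeletConfiguration, would add one line to the /var/lib/kubelet/config.yaml written during kubelet-start:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false

Per the warning text, the matching validation must also be skipped explicitly; migrating the host to cgroups v2 remains the supported path.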
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (363.012723ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (736.28s)
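
The two status probes used in these post-mortems ({{.APIServer}} here, {{.Host}} below) can be combined into a single Go-template query; a sketch against this profile:

    out/minikube-linux-arm64 status -p functional-232602 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'

For this run it would be expected to report a Running host alongside a Stopped kubelet and apiserver.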

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-232602 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-232602 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (59.949238ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-232602 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
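
The NetworkSettings block above shows how the apiserver port is published to the host; the same mapping can be read back with docker's port helper, assuming the container is still running:

    docker port functional-232602 8441/tcp

Given the inspect output this would print 127.0.0.1:33905, the host-side address minikube probes, while the kubectl context above targets 192.168.49.2:8441 on the container network.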
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (324.912883ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-739047 image ls --format yaml --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ ssh     │ functional-739047 ssh pgrep buildkitd                                                                                                                 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ image   │ functional-739047 image ls --format json --alsologtostderr                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls --format table --alsologtostderr                                                                                           │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr                                                │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ image   │ functional-739047 image ls                                                                                                                            │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ delete  │ -p functional-739047                                                                                                                                  │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
	│ start   │ -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │                     │
	│ start   │ -p functional-232602 --alsologtostderr -v=8                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:29 UTC │                     │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:3.1                                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:3.3                                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add registry.k8s.io/pause:latest                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache add minikube-local-cache-test:functional-232602                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ functional-232602 cache delete minikube-local-cache-test:functional-232602                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ list                                                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl images                                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-232602 cache reload                                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ kubectl │ functional-232602 kubectl -- --context functional-232602 get pods                                                                                     │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ start   │ -p functional-232602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:35:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:35:27.044902 1311248 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:35:27.045002 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045006 1311248 out.go:374] Setting ErrFile to fd 2...
	I1218 00:35:27.045010 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045249 1311248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:35:27.045606 1311248 out.go:368] Setting JSON to false
	I1218 00:35:27.046406 1311248 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26273,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:35:27.046458 1311248 start.go:143] virtualization:  
	I1218 00:35:27.049930 1311248 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:35:27.052925 1311248 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:35:27.053012 1311248 notify.go:221] Checking for updates...
	I1218 00:35:27.058856 1311248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:35:27.061872 1311248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:35:27.064792 1311248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:35:27.067743 1311248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:35:27.070676 1311248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:35:27.074096 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:27.074190 1311248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:35:27.106641 1311248 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:35:27.106748 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.164302 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.154715728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.164392 1311248 docker.go:319] overlay module found
	I1218 00:35:27.167427 1311248 out.go:179] * Using the docker driver based on existing profile
	I1218 00:35:27.170281 1311248 start.go:309] selected driver: docker
	I1218 00:35:27.170292 1311248 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.170444 1311248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:35:27.170546 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.230048 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.221277832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.230469 1311248 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 00:35:27.230491 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:27.230542 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:27.230580 1311248 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.235511 1311248 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:35:27.238271 1311248 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:35:27.241192 1311248 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:35:27.243943 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:27.243991 1311248 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:35:27.243999 1311248 cache.go:65] Caching tarball of preloaded images
	I1218 00:35:27.244040 1311248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:35:27.244087 1311248 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:35:27.244096 1311248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:35:27.244211 1311248 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:35:27.263574 1311248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:35:27.263584 1311248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:35:27.263598 1311248 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:35:27.263628 1311248 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:35:27.263679 1311248 start.go:364] duration metric: took 35.445µs to acquireMachinesLock for "functional-232602"
	I1218 00:35:27.263697 1311248 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:35:27.263701 1311248 fix.go:54] fixHost starting: 
	I1218 00:35:27.263946 1311248 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:35:27.280222 1311248 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:35:27.280243 1311248 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:35:27.283327 1311248 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:35:27.283352 1311248 machine.go:94] provisionDockerMachine start ...
	I1218 00:35:27.283428 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.299920 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.300231 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.300238 1311248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:35:27.452356 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.452370 1311248 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:35:27.452432 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.473471 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.473816 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.473825 1311248 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:35:27.640067 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.640142 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.667013 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.667323 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.667342 1311248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:35:27.820945 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:35:27.820961 1311248 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:35:27.820980 1311248 ubuntu.go:190] setting up certificates
	I1218 00:35:27.820989 1311248 provision.go:84] configureAuth start
	I1218 00:35:27.821051 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:27.838852 1311248 provision.go:143] copyHostCerts
	I1218 00:35:27.838916 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:35:27.838924 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:35:27.838994 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:35:27.839097 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:35:27.839100 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:35:27.839128 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:35:27.839186 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:35:27.839190 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:35:27.839213 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:35:27.839265 1311248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:35:28.109890 1311248 provision.go:177] copyRemoteCerts
	I1218 00:35:28.109947 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:35:28.109996 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.127232 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.232344 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:35:28.250086 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:35:28.268448 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:35:28.286339 1311248 provision.go:87] duration metric: took 465.326862ms to configureAuth
	I1218 00:35:28.286357 1311248 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:35:28.286550 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:28.286556 1311248 machine.go:97] duration metric: took 1.003199883s to provisionDockerMachine
	I1218 00:35:28.286562 1311248 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:35:28.286572 1311248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:35:28.286620 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:35:28.286663 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.304025 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.412869 1311248 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:35:28.416834 1311248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:35:28.416854 1311248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:35:28.416865 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:35:28.416921 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:35:28.417025 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:35:28.417099 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:35:28.417168 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:35:28.424798 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:28.442733 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:35:28.462911 1311248 start.go:296] duration metric: took 176.334186ms for postStartSetup
	I1218 00:35:28.462983 1311248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:35:28.463039 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.480489 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.585769 1311248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:35:28.590837 1311248 fix.go:56] duration metric: took 1.327128154s for fixHost
	I1218 00:35:28.590854 1311248 start.go:83] releasing machines lock for "functional-232602", held for 1.327167711s
	I1218 00:35:28.590944 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:28.607738 1311248 ssh_runner.go:195] Run: cat /version.json
	I1218 00:35:28.607789 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.608049 1311248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:35:28.608095 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.626689 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.634380 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.732432 1311248 ssh_runner.go:195] Run: systemctl --version
	I1218 00:35:28.823477 1311248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 00:35:28.828399 1311248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:35:28.828467 1311248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:35:28.836277 1311248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:35:28.836291 1311248 start.go:496] detecting cgroup driver to use...
	I1218 00:35:28.836322 1311248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:35:28.836377 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:35:28.852038 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:35:28.865568 1311248 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:35:28.865634 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:35:28.881324 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:35:28.894482 1311248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:35:29.019814 1311248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:35:29.139455 1311248 docker.go:234] disabling docker service ...
	I1218 00:35:29.139511 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:35:29.157302 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:35:29.172520 1311248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:35:29.290798 1311248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:35:29.409846 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:35:29.423039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:35:29.438313 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:35:29.447458 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:35:29.457161 1311248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:35:29.457221 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:35:29.466703 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.475761 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:35:29.484925 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.493811 1311248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:35:29.502125 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:35:29.511205 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:35:29.520548 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:35:29.530343 1311248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:35:29.538157 1311248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:35:29.545765 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:29.664409 1311248 ssh_runner.go:195] Run: sudo systemctl restart containerd
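
	The sed edits above rewrite /etc/containerd/config.toml in place before containerd is restarted; the key one for the "cgroupfs" driver is the SystemdCgroup rewrite. As a sketch of that edit in Go (setCgroupfs is a hypothetical name; the sample TOML line is invented for the demo):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCgroupfs mirrors `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`:
	// every SystemdCgroup assignment is forced to false, keeping indentation.
	func setCgroupfs(configTOML string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
	}

	func main() {
		fmt.Println(setCgroupfs("    SystemdCgroup = true"))
	}
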
	I1218 00:35:29.789454 1311248 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:35:29.789537 1311248 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:35:29.793414 1311248 start.go:564] Will wait 60s for crictl version
	I1218 00:35:29.793467 1311248 ssh_runner.go:195] Run: which crictl
	I1218 00:35:29.796922 1311248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:35:29.821478 1311248 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:35:29.821534 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.845973 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.874969 1311248 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:35:29.877886 1311248 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:35:29.897397 1311248 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:35:29.909164 1311248 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1218 00:35:29.912023 1311248 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:35:29.912156 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:29.912246 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.959601 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.959615 1311248 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:35:29.959670 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.987018 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.987029 1311248 cache_images.go:86] Images are preloaded, skipping loading
	I1218 00:35:29.987035 1311248 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:35:29.987151 1311248 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 00:35:29.987219 1311248 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:35:30.033188 1311248 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1218 00:35:30.033262 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:30.033272 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:30.033285 1311248 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:35:30.033322 1311248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:35:30.033459 1311248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 00:35:30.033555 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:35:30.044133 1311248 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:35:30.044224 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:35:30.053566 1311248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:35:30.069600 1311248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:35:30.086185 1311248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1218 00:35:30.100953 1311248 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:35:30.105204 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:30.229133 1311248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:35:30.643842 1311248 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:35:30.643853 1311248 certs.go:195] generating shared ca certs ...
	I1218 00:35:30.643868 1311248 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:35:30.644040 1311248 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:35:30.644079 1311248 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:35:30.644085 1311248 certs.go:257] generating profile certs ...
	I1218 00:35:30.644187 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:35:30.644248 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:35:30.644287 1311248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:35:30.644391 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:35:30.644420 1311248 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:35:30.644426 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:35:30.644455 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:35:30.644481 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:35:30.644512 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:35:30.644557 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:30.645271 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:35:30.667963 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:35:30.688789 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:35:30.707638 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:35:30.727172 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:35:30.745582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:35:30.763537 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:35:30.781521 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:35:30.799255 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:35:30.816582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:35:30.835230 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:35:30.852513 1311248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:35:30.865555 1311248 ssh_runner.go:195] Run: openssl version
	I1218 00:35:30.871911 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.879397 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:35:30.886681 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890109 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890169 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.930894 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:35:30.938142 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.945286 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:35:30.952538 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956151 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956245 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.997157 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:35:31.005056 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.014006 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:35:31.022034 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025894 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025961 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.067200 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 00:35:31.075278 1311248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:35:31.079306 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:35:31.123391 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:35:31.165879 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:35:31.208281 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:35:31.249146 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:35:31.290212 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 00:35:31.331444 1311248 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:31.331522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:35:31.331580 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.356945 1311248 cri.go:89] found id: ""
	I1218 00:35:31.357003 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:35:31.364788 1311248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:35:31.364798 1311248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:35:31.364876 1311248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:35:31.372428 1311248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.372951 1311248 kubeconfig.go:125] found "functional-232602" server: "https://192.168.49.2:8441"
	I1218 00:35:31.374199 1311248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:35:31.382218 1311248 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-18 00:20:57.479200490 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-18 00:35:30.095938034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
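
	The drift check above relies on diff(1) exit codes: 0 means the deployed kubeadm.yaml matches the new one, 1 means they differ and the cluster must be reconfigured. A sketch of that decision in Go (configDrifted is a hypothetical helper, not minikube's real code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// configDrifted reports whether the proposed kubeadm config differs from
	// the deployed one, distinguishing "files differ" (exit 1) from real
	// failures such as a missing file (exit 2).
	func configDrifted(current, proposed string) (bool, error) {
		err := exec.Command("sudo", "diff", "-u", current, proposed).Run()
		if err == nil {
			return false, nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, nil
		}
		return false, err
	}

	func main() {
		drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(drifted, err)
	}
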
	I1218 00:35:31.382230 1311248 kubeadm.go:1161] stopping kube-system containers ...
	I1218 00:35:31.382240 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1218 00:35:31.382293 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.418635 1311248 cri.go:89] found id: ""
	I1218 00:35:31.418695 1311248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1218 00:35:31.437319 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:35:31.447695 1311248 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 18 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 18 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 18 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 18 00:25 /etc/kubernetes/scheduler.conf
	
	I1218 00:35:31.447757 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:35:31.455511 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:35:31.463139 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.463194 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:35:31.470550 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.478132 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.478200 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.485959 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:35:31.493702 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.493757 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
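
	The sweep above keeps a kubeconfig only if it already references the expected control-plane endpoint; any file where the grep fails is deleted so that `kubeadm init phase kubeconfig` regenerates it below. A sketch of that logic (staleKubeconfig is a hypothetical helper; illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// staleKubeconfig reports true when grep cannot find the endpoint in the
	// file (grep exits non-zero), matching the checks in the log above.
	func staleKubeconfig(endpoint, path string) bool {
		return exec.Command("sudo", "grep", "-q", endpoint, path).Run() != nil
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8441"
		for _, f := range []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if staleKubeconfig(endpoint, f) {
				fmt.Printf("%s lacks %s - removing\n", f, endpoint)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
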
	I1218 00:35:31.501195 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:35:31.509596 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:31.563212 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:32.882945 1311248 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.319707666s)
	I1218 00:35:32.883005 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.109967 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.178681 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.229970 1311248 api_server.go:52] waiting for apiserver process to appear ...
	I1218 00:35:33.230040 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:33.730927 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.230378 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.730284 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.230343 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.730919 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.730993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.230539 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.731124 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.230838 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.730863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.230678 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.730230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.230236 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.731068 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:41.231109 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:41.730288 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:42.230203 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:42.730234 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:43.230141 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:43.730185 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:44.231143 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:44.730804 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:45.237230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:45.730893 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:46.230803 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:46.730882 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:47.230533 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:47.731147 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:48.230905 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:48.730814 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:49.230754 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:49.730337 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:50.230375 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:50.731190 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:51.230987 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:51.731023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:52.230495 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:52.730322 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:53.230929 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:53.730922 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:54.231058 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:54.730458 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:55.230148 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:55.730779 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:56.230494 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:56.731136 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:57.231080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:57.730219 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:58.230880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:58.730261 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:59.230265 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:59.730444 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:00.230228 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:00.730965 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:01.231030 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:01.730793 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:02.231094 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:02.730432 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:03.230277 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:03.730969 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:04.230206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:04.731080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:05.230777 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:05.730718 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:06.231042 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:06.730199 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:07.230478 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:07.730807 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:08.230613 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:08.730187 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:09.231163 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:09.731095 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:10.231010 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:10.731081 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:11.230167 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:11.730331 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:12.230144 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:12.730362 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:13.230993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:13.730893 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:14.230791 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:14.731035 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:15.230946 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:15.730274 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:16.230238 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:16.730202 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:17.231089 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:17.730821 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:18.230480 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:18.730348 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:19.230188 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:19.730212 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:20.230315 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:20.730113 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:21.231120 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:21.730951 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:22.230491 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:22.730452 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:23.230231 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:23.730205 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:24.230525 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:24.730779 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:25.230233 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:25.731067 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:26.231079 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:26.730956 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:27.230990 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:27.730196 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:28.230863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:28.730884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:29.230380 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:29.730826 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:30.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:30.731192 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:31.230615 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:31.730900 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:32.230553 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:32.730134 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
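
	The run of pgrep calls above is the apiserver wait loop announced at 00:35:33: the process is polled roughly every 500ms until it appears or the deadline passes (here it never appears, which is why log gathering starts next). A sketch of that loop (waitForAPIServer is a hypothetical name; the real code runs pgrep over SSH via ssh_runner, this runs it locally):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a kube-apiserver process until it shows up
	// or the timeout elapses; pgrep exits 0 only when a match exists.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServer(60 * time.Second))
	}
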
	I1218 00:36:33.230238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:33.230314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:33.258458 1311248 cri.go:89] found id: ""
	I1218 00:36:33.258472 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.258484 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:33.258490 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:33.258562 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:33.283965 1311248 cri.go:89] found id: ""
	I1218 00:36:33.283979 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.283986 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:33.283991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:33.284048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:33.308663 1311248 cri.go:89] found id: ""
	I1218 00:36:33.308678 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.308693 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:33.308699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:33.308760 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:33.337762 1311248 cri.go:89] found id: ""
	I1218 00:36:33.337775 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.337783 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:33.337788 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:33.337852 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:33.366489 1311248 cri.go:89] found id: ""
	I1218 00:36:33.366503 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.366510 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:33.366515 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:33.366574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:33.401983 1311248 cri.go:89] found id: ""
	I1218 00:36:33.401998 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.402005 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:33.402010 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:33.402067 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:33.436853 1311248 cri.go:89] found id: ""
	I1218 00:36:33.436867 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.436874 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:33.436883 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:33.436893 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:33.504087 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:33.504097 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:33.504107 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:33.570523 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:33.570549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:33.607484 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:33.607500 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:33.664867 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:33.664884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:36.181388 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:36.191464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:36.191521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:36.214848 1311248 cri.go:89] found id: ""
	I1218 00:36:36.214863 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.214870 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:36.214876 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:36.214933 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:36.241311 1311248 cri.go:89] found id: ""
	I1218 00:36:36.241324 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.241331 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:36.241336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:36.241394 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:36.265257 1311248 cri.go:89] found id: ""
	I1218 00:36:36.265271 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.265279 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:36.265284 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:36.265343 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:36.288492 1311248 cri.go:89] found id: ""
	I1218 00:36:36.288506 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.288513 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:36.288518 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:36.288574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:36.316558 1311248 cri.go:89] found id: ""
	I1218 00:36:36.316573 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.316580 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:36.316585 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:36.316664 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:36.341952 1311248 cri.go:89] found id: ""
	I1218 00:36:36.341966 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.341973 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:36.341979 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:36.342037 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:36.365945 1311248 cri.go:89] found id: ""
	I1218 00:36:36.365959 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.365966 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:36.365974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:36.365983 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:36.426123 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:36.426142 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:36.444123 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:36.444140 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:36.509193 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:36.509204 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:36.509214 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:36.571649 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:36.571667 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
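Each polling round above walks a fixed list of control-plane components and asks the CRI runtime for containers matching each name; an empty result yields the "No container was found matching" warning. A sketch of that pattern, assuming crictl is installed on the node (illustrative, not minikube's actual cri.go):

```go
// Sketch of the per-component container check repeated in each round.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Equivalent to: sudo crictl ps -a --quiet --name=<name>
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if len(ids) == 0 { // command errors are treated as "no containers" for brevity
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s)\n", name, len(ids))
	}
}
```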
	I1218 00:36:39.103696 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:39.113703 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:39.113762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:39.141856 1311248 cri.go:89] found id: ""
	I1218 00:36:39.141870 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.141878 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:39.141883 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:39.141944 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:39.170038 1311248 cri.go:89] found id: ""
	I1218 00:36:39.170052 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.170101 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:39.170107 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:39.170172 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:39.199014 1311248 cri.go:89] found id: ""
	I1218 00:36:39.199028 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.199035 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:39.199041 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:39.199101 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:39.226392 1311248 cri.go:89] found id: ""
	I1218 00:36:39.226414 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.226422 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:39.226427 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:39.226493 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:39.251905 1311248 cri.go:89] found id: ""
	I1218 00:36:39.251920 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.251927 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:39.251932 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:39.251992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:39.276915 1311248 cri.go:89] found id: ""
	I1218 00:36:39.276937 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.276944 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:39.276949 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:39.277007 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:39.301520 1311248 cri.go:89] found id: ""
	I1218 00:36:39.301534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.301542 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:39.301551 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:39.301560 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:39.364240 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:39.364259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.394082 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:39.394098 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:39.460886 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:39.460907 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:39.477258 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:39.477273 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:39.547172 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
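With no containers to inspect, the collector falls back to host-level sources: the kubelet and containerd journals plus the kernel ring buffer, each capped at the last 400 lines. A hedged wrapper around those collectors (the shell commands are verbatim from the log; the Go harness is illustrative):

```go
// Run each host-level log collector used above and print its output.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	collectors := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range collectors {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}
```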
	I1218 00:36:42.048213 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:42.059442 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:42.059521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:42.095887 1311248 cri.go:89] found id: ""
	I1218 00:36:42.095903 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.095911 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:42.095917 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:42.095987 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:42.126738 1311248 cri.go:89] found id: ""
	I1218 00:36:42.126756 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.126763 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:42.126769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:42.126846 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:42.183895 1311248 cri.go:89] found id: ""
	I1218 00:36:42.183916 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.183924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:42.183931 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:42.184005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:42.217296 1311248 cri.go:89] found id: ""
	I1218 00:36:42.217313 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.217320 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:42.217333 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:42.217410 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:42.248021 1311248 cri.go:89] found id: ""
	I1218 00:36:42.248038 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.248065 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:42.248071 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:42.248143 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:42.278624 1311248 cri.go:89] found id: ""
	I1218 00:36:42.278650 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.278658 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:42.278664 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:42.278732 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:42.306575 1311248 cri.go:89] found id: ""
	I1218 00:36:42.306589 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.306604 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:42.306613 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:42.306622 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:42.366835 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:42.366859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:42.381793 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:42.381810 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:42.478588 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:42.478598 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:42.478608 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:42.541093 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:42.541114 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:45.069751 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:45.106091 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:45.106161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:45.152078 1311248 cri.go:89] found id: ""
	I1218 00:36:45.152105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.152113 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:45.152120 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:45.152202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:45.228849 1311248 cri.go:89] found id: ""
	I1218 00:36:45.228866 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.228874 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:45.228881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:45.229017 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:45.284605 1311248 cri.go:89] found id: ""
	I1218 00:36:45.284640 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.284648 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:45.284654 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:45.284773 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:45.318439 1311248 cri.go:89] found id: ""
	I1218 00:36:45.318454 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.318461 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:45.318467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:45.318532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:45.348962 1311248 cri.go:89] found id: ""
	I1218 00:36:45.348976 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.348984 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:45.348990 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:45.349055 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:45.378098 1311248 cri.go:89] found id: ""
	I1218 00:36:45.378112 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.378119 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:45.378125 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:45.378227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:45.435291 1311248 cri.go:89] found id: ""
	I1218 00:36:45.435311 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.435318 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:45.435335 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:45.435362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:45.505552 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:45.505571 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:45.523778 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:45.523794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:45.592584 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:45.584713   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.585204   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.586780   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.587182   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.588708   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:45.592594 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:45.592606 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:45.658999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:45.659018 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
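The container-status collector uses a shell fallback chain: `which crictl` resolves the binary path (echoing the bare name if lookup fails), and if the whole crictl listing fails, it falls back to `docker ps -a`. The same chain can be driven directly; a sketch:

```go
// The container-status fallback chain, verbatim from the log,
// wrapped in an illustrative runner.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("container status unavailable:", err)
	}
}
```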
	I1218 00:36:48.186749 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:48.197169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:48.197230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:48.222369 1311248 cri.go:89] found id: ""
	I1218 00:36:48.222383 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.222390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:48.222396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:48.222459 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:48.247132 1311248 cri.go:89] found id: ""
	I1218 00:36:48.247146 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.247153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:48.247158 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:48.247217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:48.272441 1311248 cri.go:89] found id: ""
	I1218 00:36:48.272455 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.272462 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:48.272467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:48.272526 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:48.302640 1311248 cri.go:89] found id: ""
	I1218 00:36:48.302655 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.302662 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:48.302679 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:48.302737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:48.329411 1311248 cri.go:89] found id: ""
	I1218 00:36:48.329425 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.329433 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:48.329438 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:48.329497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:48.358419 1311248 cri.go:89] found id: ""
	I1218 00:36:48.358433 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.358440 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:48.358445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:48.358503 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:48.383182 1311248 cri.go:89] found id: ""
	I1218 00:36:48.383195 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.383203 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:48.383210 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:48.383220 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:48.451796 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:48.451815 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:48.467080 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:48.467096 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:48.533083 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:48.524386   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.525232   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.526917   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.527532   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.529214   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:48.533092 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:48.533103 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:48.596920 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:48.596940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:51.124756 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:51.135594 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:51.135659 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:51.164133 1311248 cri.go:89] found id: ""
	I1218 00:36:51.164148 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.164156 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:51.164161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:51.164226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:51.190200 1311248 cri.go:89] found id: ""
	I1218 00:36:51.190215 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.190222 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:51.190228 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:51.190291 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:51.216170 1311248 cri.go:89] found id: ""
	I1218 00:36:51.216187 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.216194 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:51.216200 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:51.216263 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:51.246031 1311248 cri.go:89] found id: ""
	I1218 00:36:51.246045 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.246052 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:51.246058 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:51.246122 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:51.278864 1311248 cri.go:89] found id: ""
	I1218 00:36:51.278878 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.278885 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:51.278890 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:51.278963 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:51.303118 1311248 cri.go:89] found id: ""
	I1218 00:36:51.303132 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.303139 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:51.303144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:51.303202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:51.328091 1311248 cri.go:89] found id: ""
	I1218 00:36:51.328105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.328112 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:51.328120 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:51.328130 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:51.385226 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:51.385249 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:51.400951 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:51.400967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:51.479293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:51.470350   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.470962   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.472611   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.473295   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.474905   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:51.479304 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:51.479315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:51.541268 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:51.541288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
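The timestamps show the whole cycle (one pgrep probe, seven crictl queries, four log collectors) repeating on a roughly three-second cadence: 00:36:33, :36, :39, :42, :45, :48, :51, :54, :57. A hypothetical wait loop with that shape (the two-minute deadline is an assumption, not taken from the log):

```go
// Hypothetical retry loop mirroring the ~3s probe cadence in this log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Same probe as the log: exact full-command-line pgrep match.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```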
	I1218 00:36:54.069293 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:54.080067 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:54.080153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:54.106375 1311248 cri.go:89] found id: ""
	I1218 00:36:54.106390 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.106402 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:54.106408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:54.106467 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:54.131767 1311248 cri.go:89] found id: ""
	I1218 00:36:54.131781 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.131788 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:54.131793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:54.131850 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:54.157519 1311248 cri.go:89] found id: ""
	I1218 00:36:54.157534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.157541 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:54.157546 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:54.157606 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:54.182381 1311248 cri.go:89] found id: ""
	I1218 00:36:54.182396 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.182403 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:54.182408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:54.182478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:54.211219 1311248 cri.go:89] found id: ""
	I1218 00:36:54.211234 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.211241 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:54.211247 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:54.211323 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:54.243605 1311248 cri.go:89] found id: ""
	I1218 00:36:54.243627 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.243634 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:54.243640 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:54.243710 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:54.268614 1311248 cri.go:89] found id: ""
	I1218 00:36:54.268648 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.268655 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:54.268664 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:54.268675 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:54.332655 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:54.324645   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.325064   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326586   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326911   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.328406   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:54.332668 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:54.332679 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:54.396896 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:54.396916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:54.440350 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:54.440371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:54.503158 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:54.503178 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:57.019672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:57.030198 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:57.030268 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:57.059845 1311248 cri.go:89] found id: ""
	I1218 00:36:57.059859 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.059866 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:57.059872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:57.059939 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:57.086203 1311248 cri.go:89] found id: ""
	I1218 00:36:57.086217 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.086224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:57.086229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:57.086326 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:57.115321 1311248 cri.go:89] found id: ""
	I1218 00:36:57.115335 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.115342 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:57.115347 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:57.115416 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:57.141717 1311248 cri.go:89] found id: ""
	I1218 00:36:57.141731 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.141738 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:57.141743 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:57.141801 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:57.166376 1311248 cri.go:89] found id: ""
	I1218 00:36:57.166389 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.166396 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:57.166400 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:57.166470 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:57.194461 1311248 cri.go:89] found id: ""
	I1218 00:36:57.194475 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.194494 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:57.194500 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:57.194557 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:57.219267 1311248 cri.go:89] found id: ""
	I1218 00:36:57.219280 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.219287 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:57.219295 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:57.219305 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:57.274913 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:57.274932 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:57.290015 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:57.290032 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:57.353493 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:57.344799   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.345456   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347207   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347788   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.349492   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:57.353504 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:57.353514 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:57.424372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:57.424400 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
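Of the collectors, only "describe nodes" needs a live apiserver, which is why it is the one step that fails every round: it shells out to the version-pinned kubectl with the node's kubeconfig and, on failure, records the exit status plus captured stdout and stderr (the logs.go:130 warnings above). A sketch of that invocation:

```go
// Sketch of the describe-nodes collector: run the pinned kubectl
// against the node's kubeconfig and surface output on failure.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes" +
		" --kubeconfig=/var/lib/minikube/kubeconfig"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		// With the apiserver down this reproduces the
		// "connection refused" stderr captured above.
		fmt.Printf("failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
```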
	I1218 00:36:59.955778 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:59.965801 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:59.965861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:59.993708 1311248 cri.go:89] found id: ""
	I1218 00:36:59.993722 1311248 logs.go:282] 0 containers: []
	W1218 00:36:59.993729 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:59.993734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:59.993792 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:00.055250 1311248 cri.go:89] found id: ""
	I1218 00:37:00.055266 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.055274 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:00.055280 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:00.055388 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:00.117792 1311248 cri.go:89] found id: ""
	I1218 00:37:00.117810 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.117818 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:00.117824 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:00.117903 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:00.170362 1311248 cri.go:89] found id: ""
	I1218 00:37:00.170378 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.170394 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:00.170401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:00.170482 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:00.229984 1311248 cri.go:89] found id: ""
	I1218 00:37:00.230002 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.230010 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:00.230015 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:00.230094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:00.264809 1311248 cri.go:89] found id: ""
	I1218 00:37:00.264826 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.264833 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:00.264839 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:00.264908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:00.313700 1311248 cri.go:89] found id: ""
	I1218 00:37:00.313718 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.313725 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:00.313734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:00.313747 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:00.390802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:00.390825 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:00.428189 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:00.428207 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:00.494729 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:00.494750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:00.511226 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:00.511245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:00.579855 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:00.571615   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.572526   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574023   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574461   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.575927   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:03.080114 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:03.090701 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:03.090768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:03.123581 1311248 cri.go:89] found id: ""
	I1218 00:37:03.123596 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.123603 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:03.123608 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:03.123666 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:03.148602 1311248 cri.go:89] found id: ""
	I1218 00:37:03.148615 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.148657 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:03.148662 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:03.148733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:03.174826 1311248 cri.go:89] found id: ""
	I1218 00:37:03.174840 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.174848 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:03.174853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:03.174927 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:03.200912 1311248 cri.go:89] found id: ""
	I1218 00:37:03.200926 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.200933 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:03.200939 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:03.200998 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:03.226151 1311248 cri.go:89] found id: ""
	I1218 00:37:03.226166 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.226173 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:03.226179 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:03.226237 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:03.253785 1311248 cri.go:89] found id: ""
	I1218 00:37:03.253799 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.253806 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:03.253812 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:03.253878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:03.279482 1311248 cri.go:89] found id: ""
	I1218 00:37:03.279495 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.279502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:03.279510 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:03.279521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:03.294545 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:03.294563 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:03.360050 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:03.351618   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.352118   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.353740   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.354377   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.356009   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:03.360059 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:03.360071 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:03.423132 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:03.423151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:03.461805 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:03.461820 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
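	Each polling iteration above enumerates the expected control-plane components one by one with crictl. A hand-rolled equivalent of that loop, built only from the command already shown in the log (illustrative, not minikube's actual Go code):

	    # --quiet prints container IDs only; empty output means "not found"
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done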
	I1218 00:37:06.018802 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:06.030336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:06.030406 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:06.056426 1311248 cri.go:89] found id: ""
	I1218 00:37:06.056440 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.056447 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:06.056453 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:06.056513 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:06.086319 1311248 cri.go:89] found id: ""
	I1218 00:37:06.086333 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.086341 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:06.086346 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:06.086413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:06.112062 1311248 cri.go:89] found id: ""
	I1218 00:37:06.112077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.112084 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:06.112089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:06.112157 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:06.137317 1311248 cri.go:89] found id: ""
	I1218 00:37:06.137331 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.137344 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:06.137351 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:06.137419 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:06.165090 1311248 cri.go:89] found id: ""
	I1218 00:37:06.165104 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.165111 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:06.165116 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:06.165174 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:06.190738 1311248 cri.go:89] found id: ""
	I1218 00:37:06.190753 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.190759 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:06.190765 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:06.190822 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:06.215038 1311248 cri.go:89] found id: ""
	I1218 00:37:06.215066 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.215075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:06.215083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:06.215094 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.270893 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:06.270915 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:06.285817 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:06.285834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:06.354768 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:06.346564   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.347477   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349158   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349463   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.350911   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:06.354777 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:06.354787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:06.416937 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:06.416957 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
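	When no component containers exist, minikube falls back to host-level diagnostics, as the "Gathering logs" lines show. The same four sources can be pulled directly on the node (commands copied verbatim from the log):

	    sudo journalctl -u kubelet -n 400        # kubelet service log
	    sudo journalctl -u containerd -n 400     # container runtime log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a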
	I1218 00:37:08.951149 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:08.961238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:08.961297 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:08.985900 1311248 cri.go:89] found id: ""
	I1218 00:37:08.985916 1311248 logs.go:282] 0 containers: []
	W1218 00:37:08.985923 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:08.985928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:08.985993 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:09.016022 1311248 cri.go:89] found id: ""
	I1218 00:37:09.016036 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.016043 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:09.016048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:09.016106 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:09.040820 1311248 cri.go:89] found id: ""
	I1218 00:37:09.040841 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.040849 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:09.040853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:09.040912 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:09.065452 1311248 cri.go:89] found id: ""
	I1218 00:37:09.065466 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.065473 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:09.065478 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:09.065539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:09.095062 1311248 cri.go:89] found id: ""
	I1218 00:37:09.095077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.095083 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:09.095089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:09.095151 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:09.120274 1311248 cri.go:89] found id: ""
	I1218 00:37:09.120287 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.120294 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:09.120300 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:09.120366 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:09.144652 1311248 cri.go:89] found id: ""
	I1218 00:37:09.144667 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.144674 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:09.144683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:09.144700 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:09.159355 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:09.159371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:09.224560 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:09.215599   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.216102   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.217785   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.218180   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.219658   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:09.224571 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:09.224582 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:09.286931 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:09.286951 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:09.318873 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:09.318888 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
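	Each poll cycle begins with a process check before any CRI queries: in the pgrep invocation above, -f matches against the full command line, -x requires that line to match the pattern exactly, and -n returns only the newest match. A non-zero exit (no apiserver process) is what triggers the container enumeration that follows; the same check in isolation (a sketch):

	    if ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	      echo "kube-apiserver process not running yet"
	    fi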
	I1218 00:37:11.876699 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:11.887524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:11.887583 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:11.913617 1311248 cri.go:89] found id: ""
	I1218 00:37:11.913631 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.913638 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:11.913643 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:11.913701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:11.942203 1311248 cri.go:89] found id: ""
	I1218 00:37:11.942219 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.942226 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:11.942231 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:11.942292 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:11.967671 1311248 cri.go:89] found id: ""
	I1218 00:37:11.967685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.967692 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:11.967697 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:11.967766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:11.992422 1311248 cri.go:89] found id: ""
	I1218 00:37:11.992437 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.992443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:11.992448 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:11.992505 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:12.031034 1311248 cri.go:89] found id: ""
	I1218 00:37:12.031049 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.031056 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:12.031061 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:12.031119 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:12.057654 1311248 cri.go:89] found id: ""
	I1218 00:37:12.057669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.057677 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:12.057682 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:12.057764 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:12.082063 1311248 cri.go:89] found id: ""
	I1218 00:37:12.082078 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.082084 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:12.082092 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:12.082102 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:12.111103 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:12.111119 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:12.168426 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:12.168446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:12.183407 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:12.183423 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:12.251784 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:12.243223   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.243928   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.245555   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.246042   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.247684   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:12.251803 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:12.251814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:14.823080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:14.834459 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:14.834525 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:14.860258 1311248 cri.go:89] found id: ""
	I1218 00:37:14.860272 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.860278 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:14.860283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:14.860341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:14.884703 1311248 cri.go:89] found id: ""
	I1218 00:37:14.884722 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.884729 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:14.884734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:14.884794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:14.909031 1311248 cri.go:89] found id: ""
	I1218 00:37:14.909046 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.909054 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:14.909059 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:14.909130 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:14.934504 1311248 cri.go:89] found id: ""
	I1218 00:37:14.934518 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.934525 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:14.934531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:14.934590 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:14.965623 1311248 cri.go:89] found id: ""
	I1218 00:37:14.965638 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.965646 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:14.965651 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:14.965718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:14.991607 1311248 cri.go:89] found id: ""
	I1218 00:37:14.991623 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.991631 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:14.991636 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:14.991711 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:15.027331 1311248 cri.go:89] found id: ""
	I1218 00:37:15.027347 1311248 logs.go:282] 0 containers: []
	W1218 00:37:15.027355 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:15.027364 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:15.027376 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:15.102509 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:15.094618   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.095457   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.096397   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.097224   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.098386   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:15.102519 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:15.102530 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:15.167080 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:15.167101 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:15.200488 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:15.200504 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:15.261320 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:15.261342 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:17.777092 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:17.788005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:17.788070 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:17.813820 1311248 cri.go:89] found id: ""
	I1218 00:37:17.813834 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.813841 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:17.813846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:17.813906 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:17.841574 1311248 cri.go:89] found id: ""
	I1218 00:37:17.841588 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.841605 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:17.841610 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:17.841679 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:17.865628 1311248 cri.go:89] found id: ""
	I1218 00:37:17.865644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.865650 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:17.865656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:17.865713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:17.891259 1311248 cri.go:89] found id: ""
	I1218 00:37:17.891273 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.891289 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:17.891295 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:17.891363 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:17.918377 1311248 cri.go:89] found id: ""
	I1218 00:37:17.918391 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.918398 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:17.918403 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:17.918461 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:17.948139 1311248 cri.go:89] found id: ""
	I1218 00:37:17.948171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.948178 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:17.948183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:17.948251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:17.971855 1311248 cri.go:89] found id: ""
	I1218 00:37:17.971869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.971876 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:17.971884 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:17.971894 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:18.026594 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:18.026614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:18.042303 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:18.042328 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:18.108683 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:18.100223   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.100970   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.102555   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.103125   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.104779   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:18.108704 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:18.108729 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:18.172657 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:18.172676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:20.704818 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:20.715060 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:20.715120 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:20.741147 1311248 cri.go:89] found id: ""
	I1218 00:37:20.741161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.741168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:20.741174 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:20.741231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:20.765846 1311248 cri.go:89] found id: ""
	I1218 00:37:20.765860 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.765867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:20.765872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:20.765930 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:20.795338 1311248 cri.go:89] found id: ""
	I1218 00:37:20.795351 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.795358 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:20.795364 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:20.795421 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:20.823054 1311248 cri.go:89] found id: ""
	I1218 00:37:20.823068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.823075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:20.823080 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:20.823137 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:20.848186 1311248 cri.go:89] found id: ""
	I1218 00:37:20.848200 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.848208 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:20.848213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:20.848278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:20.872642 1311248 cri.go:89] found id: ""
	I1218 00:37:20.872656 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.872662 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:20.872668 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:20.872771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:20.897151 1311248 cri.go:89] found id: ""
	I1218 00:37:20.897165 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.897172 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:20.897180 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:20.897190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:20.951948 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:20.951968 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:20.966927 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:20.966943 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:21.033275 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:21.024825   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.025249   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.026815   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.028251   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.029400   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:21.033286 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:21.033296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:21.096425 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:21.096445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
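	By this point the pattern is stable: the apiserver never comes up, so every cycle finds zero containers and re-gathers the same logs. Since the kubelet is what should be starting the static control-plane pods, its service state is the natural next thing to inspect (a suggested follow-up, not part of the logged run):

	    sudo systemctl status kubelet --no-pager   # is the service active or restart-looping?
	    sudo crictl pods                           # any pod sandboxes created at all?
	    sudo journalctl -u kubelet -n 50 --no-pager | grep -iE 'error|fail'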
	I1218 00:37:23.624716 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:23.635084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:23.635160 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:23.668648 1311248 cri.go:89] found id: ""
	I1218 00:37:23.668662 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.668670 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:23.668675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:23.668755 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:23.700454 1311248 cri.go:89] found id: ""
	I1218 00:37:23.700468 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.700475 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:23.700480 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:23.700538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:23.732021 1311248 cri.go:89] found id: ""
	I1218 00:37:23.732035 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.732043 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:23.732048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:23.732124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:23.760854 1311248 cri.go:89] found id: ""
	I1218 00:37:23.760868 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.760875 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:23.760881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:23.760942 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:23.786164 1311248 cri.go:89] found id: ""
	I1218 00:37:23.786178 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.786185 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:23.786189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:23.786248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:23.811196 1311248 cri.go:89] found id: ""
	I1218 00:37:23.811220 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.811229 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:23.811234 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:23.811300 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:23.835282 1311248 cri.go:89] found id: ""
	I1218 00:37:23.835297 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.835314 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:23.835323 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:23.835334 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:23.899950 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:23.891162   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.891981   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893473   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893917   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.895404   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:23.899970 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:23.899981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:23.966454 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:23.966474 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:23.994564 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:23.994580 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:24.052734 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:24.052755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
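With no containers to inspect, log collection falls back to the systemd journals and the kernel ring buffer. These are the three node-side commands run each cycle, verbatim from the log; -n 400 caps each journal at its last 400 lines, and the dmesg flags disable the pager and color while keeping only warning-or-worse messages:

    sudo journalctl -u kubelet -n 400       # kubelet service journal
    sudo journalctl -u containerd -n 400    # container runtime journal
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400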
	I1218 00:37:26.568298 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:26.578561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:26.578622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:26.602733 1311248 cri.go:89] found id: ""
	I1218 00:37:26.602747 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.602755 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:26.602761 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:26.602826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:26.631092 1311248 cri.go:89] found id: ""
	I1218 00:37:26.631106 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.631113 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:26.631118 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:26.631180 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:26.677513 1311248 cri.go:89] found id: ""
	I1218 00:37:26.677528 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.677536 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:26.677541 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:26.677608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:26.712071 1311248 cri.go:89] found id: ""
	I1218 00:37:26.712085 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.712093 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:26.712100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:26.712167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:26.738769 1311248 cri.go:89] found id: ""
	I1218 00:37:26.738783 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.738790 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:26.738795 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:26.738857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:26.764344 1311248 cri.go:89] found id: ""
	I1218 00:37:26.764358 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.764365 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:26.764370 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:26.764428 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:26.790276 1311248 cri.go:89] found id: ""
	I1218 00:37:26.790290 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.790297 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:26.790305 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:26.790315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:26.845607 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:26.845626 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:26.861063 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:26.861080 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:26.931574 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:26.923282   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.923912   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925418   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925858   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.927306   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:26.931584 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:26.931595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:26.998426 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:26.998445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
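The container status step wraps crictl in a small shell fallback chain so it still yields output if crictl is missing from root's PATH or the node is running dockerd instead of containerd; verbatim from the log:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Command substitution resolves an absolute path when which succeeds and otherwise passes the bare name through; if the crictl listing itself fails, docker ps -a is the last resort.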
	I1218 00:37:29.540997 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:29.551044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:29.551103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:29.575146 1311248 cri.go:89] found id: ""
	I1218 00:37:29.575161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.575168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:29.575173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:29.575230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:29.599039 1311248 cri.go:89] found id: ""
	I1218 00:37:29.599052 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.599059 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:29.599064 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:29.599123 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:29.623971 1311248 cri.go:89] found id: ""
	I1218 00:37:29.623985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.623993 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:29.623998 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:29.624057 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:29.653653 1311248 cri.go:89] found id: ""
	I1218 00:37:29.653669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.653675 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:29.653681 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:29.653754 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:29.687572 1311248 cri.go:89] found id: ""
	I1218 00:37:29.687586 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.687593 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:29.687599 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:29.687670 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:29.725789 1311248 cri.go:89] found id: ""
	I1218 00:37:29.725803 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.725811 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:29.725816 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:29.725878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:29.753212 1311248 cri.go:89] found id: ""
	I1218 00:37:29.753226 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.753233 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:29.753241 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:29.753253 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:29.810976 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:29.810996 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:29.825952 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:29.825969 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:29.893717 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:29.885172   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.885837   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.887444   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.888114   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.889881   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:29.893736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:29.893748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:29.959773 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:29.959794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
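Each cycle opens with a process-level check for the apiserver. In pgrep, -f matches the pattern against the full command line, -x requires that match to cover the whole line, and -n keeps only the newest matching PID; a non-zero exit therefore means no kube-apiserver process exists on the node at all:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # exits 1 when nothing matches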
	I1218 00:37:32.492460 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:32.502745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:32.502807 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:32.528416 1311248 cri.go:89] found id: ""
	I1218 00:37:32.528431 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.528438 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:32.528443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:32.528501 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:32.553770 1311248 cri.go:89] found id: ""
	I1218 00:37:32.553785 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.553792 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:32.553798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:32.553861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:32.577941 1311248 cri.go:89] found id: ""
	I1218 00:37:32.577956 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.577963 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:32.577969 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:32.578028 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:32.604043 1311248 cri.go:89] found id: ""
	I1218 00:37:32.604058 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.604075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:32.604081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:32.604159 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:32.629080 1311248 cri.go:89] found id: ""
	I1218 00:37:32.629095 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.629102 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:32.629108 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:32.629167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:32.664156 1311248 cri.go:89] found id: ""
	I1218 00:37:32.664171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.664187 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:32.664193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:32.664281 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:32.692107 1311248 cri.go:89] found id: ""
	I1218 00:37:32.692141 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.692149 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:32.692158 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:32.692168 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:32.758211 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:32.758238 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:32.774028 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:32.774047 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:32.839724 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:32.829408   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.831067   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.832031   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.833732   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.834447   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:32.839734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:32.839749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:32.905609 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:32.905633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
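The one Kubernetes-level probe, describe nodes, deliberately uses the kubectl binary minikube staged for v1.35.0-rc.1 together with the node-local kubeconfig rather than the host's, so it fails for the same connection-refused reason as every other API call in this section:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig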
	I1218 00:37:35.434204 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:35.445035 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:35.445099 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:35.470531 1311248 cri.go:89] found id: ""
	I1218 00:37:35.470545 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.470553 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:35.470558 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:35.470621 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:35.494976 1311248 cri.go:89] found id: ""
	I1218 00:37:35.494990 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.494996 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:35.495001 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:35.495063 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:35.519629 1311248 cri.go:89] found id: ""
	I1218 00:37:35.519644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.519651 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:35.519656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:35.519714 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:35.544438 1311248 cri.go:89] found id: ""
	I1218 00:37:35.544453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.544460 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:35.544465 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:35.544523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:35.569684 1311248 cri.go:89] found id: ""
	I1218 00:37:35.569699 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.569706 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:35.569712 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:35.569771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:35.595541 1311248 cri.go:89] found id: ""
	I1218 00:37:35.595556 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.595563 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:35.595568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:35.595632 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:35.620307 1311248 cri.go:89] found id: ""
	I1218 00:37:35.620321 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.620328 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:35.620336 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:35.620346 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:35.678927 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:35.678945 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:35.697469 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:35.697488 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:35.774692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:35.766317   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.766971   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768479   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768968   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.770457   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:35.774703 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:35.774713 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:35.836772 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:35.836792 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
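The five memcache.go lines per kubectl run are client-go's discovery retries: each one is a failed GET of the /api group list with a 32-second client timeout, and the final The connection to the server ... line is kubectl's summary verdict. A hypothetical reproduction of a single discovery attempt, assuming it is issued on the node itself:

    curl -sk --max-time 32 https://localhost:8441/api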
	I1218 00:37:38.369786 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:38.380243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:38.380304 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:38.406412 1311248 cri.go:89] found id: ""
	I1218 00:37:38.406426 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.406433 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:38.406439 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:38.406497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:38.431433 1311248 cri.go:89] found id: ""
	I1218 00:37:38.431447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.431454 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:38.431460 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:38.431518 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:38.455854 1311248 cri.go:89] found id: ""
	I1218 00:37:38.455869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.455876 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:38.455881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:38.455943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:38.480414 1311248 cri.go:89] found id: ""
	I1218 00:37:38.480428 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.480435 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:38.480440 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:38.480497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:38.506521 1311248 cri.go:89] found id: ""
	I1218 00:37:38.506535 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.506551 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:38.506557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:38.506630 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:38.531738 1311248 cri.go:89] found id: ""
	I1218 00:37:38.531762 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.531769 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:38.531774 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:38.531840 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:38.557054 1311248 cri.go:89] found id: ""
	I1218 00:37:38.557068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.557075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:38.557083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:38.557092 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:38.613102 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:38.613120 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:38.627653 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:38.627670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:38.723568 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:38.715053   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.715674   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717355   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717806   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.719420   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:38.723579 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:38.723591 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:38.784988 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:38.785008 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
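The timestamps show the whole probe repeating on a roughly 2.5 to 3 second cadence (00:37:23.6, 26.6, 29.5, ... 47.2) until the test's overall wait deadline expires. A hedged sketch of an equivalent readiness wait, not minikube's actual code:

    # poll until a running kube-apiserver container appears
    until sudo crictl ps --quiet --name=kube-apiserver | grep -q .; do
      sleep 2.5   # matches the observed spacing between cycles
    done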
	I1218 00:37:41.315880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:41.326378 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:41.326457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:41.351366 1311248 cri.go:89] found id: ""
	I1218 00:37:41.351381 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.351390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:41.351395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:41.351454 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:41.376110 1311248 cri.go:89] found id: ""
	I1218 00:37:41.376124 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.376131 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:41.376137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:41.376192 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:41.401062 1311248 cri.go:89] found id: ""
	I1218 00:37:41.401075 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.401082 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:41.401087 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:41.401146 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:41.425454 1311248 cri.go:89] found id: ""
	I1218 00:37:41.425469 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.425475 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:41.425481 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:41.425539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:41.454711 1311248 cri.go:89] found id: ""
	I1218 00:37:41.454724 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.454732 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:41.454737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:41.454799 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:41.479667 1311248 cri.go:89] found id: ""
	I1218 00:37:41.479681 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.479688 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:41.479694 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:41.479752 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:41.504248 1311248 cri.go:89] found id: ""
	I1218 00:37:41.504261 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.504268 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:41.504276 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:41.504323 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:41.559589 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:41.559609 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:41.574018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:41.574034 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:41.637175 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:41.628415   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.628972   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.630734   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.631095   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.632663   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:41.637186 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:41.637196 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:41.712099 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:41.712122 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.243063 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:44.253213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:44.253272 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:44.278124 1311248 cri.go:89] found id: ""
	I1218 00:37:44.278138 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.278145 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:44.278150 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:44.278211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:44.302729 1311248 cri.go:89] found id: ""
	I1218 00:37:44.302743 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.302750 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:44.302755 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:44.302813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:44.327369 1311248 cri.go:89] found id: ""
	I1218 00:37:44.327384 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.327391 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:44.327396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:44.327458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:44.351769 1311248 cri.go:89] found id: ""
	I1218 00:37:44.351784 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.351791 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:44.351796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:44.351858 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:44.378488 1311248 cri.go:89] found id: ""
	I1218 00:37:44.378502 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.378509 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:44.378514 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:44.378574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:44.404134 1311248 cri.go:89] found id: ""
	I1218 00:37:44.404149 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.404156 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:44.404161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:44.404219 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:44.428529 1311248 cri.go:89] found id: ""
	I1218 00:37:44.428543 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.428551 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:44.428559 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:44.428570 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:44.443196 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:44.443212 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:44.505692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:44.497164   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.497880   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.499622   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.500221   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.501801   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:44.505702 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:44.505712 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:44.571665 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:44.571686 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.600535 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:44.600553 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
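The remaining cycles repeat the same probes and the same empty results until the start attempt is abandoned. After a failure like this, the same journals and container listings can be pulled from the host in one shot, assuming the profile from this run:

    # one-shot collection of node logs into a local file
    minikube -p functional-232602 logs --file=./functional-232602.log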
	I1218 00:37:47.157844 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:47.168414 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:47.168474 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:47.197971 1311248 cri.go:89] found id: ""
	I1218 00:37:47.197985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.197992 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:47.197997 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:47.198054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:47.223237 1311248 cri.go:89] found id: ""
	I1218 00:37:47.223251 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.223258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:47.223263 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:47.223322 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:47.251998 1311248 cri.go:89] found id: ""
	I1218 00:37:47.252018 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.252025 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:47.252031 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:47.252089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:47.275741 1311248 cri.go:89] found id: ""
	I1218 00:37:47.275755 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.275764 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:47.275769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:47.275826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:47.302583 1311248 cri.go:89] found id: ""
	I1218 00:37:47.302597 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.302604 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:47.302609 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:47.302665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:47.327501 1311248 cri.go:89] found id: ""
	I1218 00:37:47.327516 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.327523 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:47.327528 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:47.327594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:47.352433 1311248 cri.go:89] found id: ""
	I1218 00:37:47.352447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.352454 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:47.352463 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:47.352473 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:47.410340 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:47.410362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:47.425365 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:47.425388 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:47.492532 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:47.484205   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.484730   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486231   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486712   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.488178   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:47.492542 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:47.492562 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:47.553805 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:47.553828 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
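The kubectl failures repeated above all reduce to the same condition: nothing is listening on 127.0.0.1:8441, so every TCP connect is refused. The sketch below shows the readiness check this loop is effectively performing, using only the Go standard library; the address comes from the log, while the function and timings are assumptions for illustration, not minikube's wait code.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer retries a plain TCP dial until something accepts
// connections on addr or the deadline passes. "connection refused"
// (as in the log) means the dial fails immediately each round.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // a listener is up; kubectl would now get past the dial
		}
		time.Sleep(3 * time.Second) // roughly the poll cadence visible in the timestamps above
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8441", time.Minute); err != nil {
		fmt.Println(err)
	}
}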
	I1218 00:37:50.086246 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:50.097136 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:50.097206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:50.124671 1311248 cri.go:89] found id: ""
	I1218 00:37:50.124685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.124693 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:50.124698 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:50.124766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:50.150439 1311248 cri.go:89] found id: ""
	I1218 00:37:50.150453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.150460 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:50.150464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:50.150523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:50.174899 1311248 cri.go:89] found id: ""
	I1218 00:37:50.174913 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.174921 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:50.174926 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:50.174992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:50.200398 1311248 cri.go:89] found id: ""
	I1218 00:37:50.200412 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.200420 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:50.200425 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:50.200486 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:50.226325 1311248 cri.go:89] found id: ""
	I1218 00:37:50.226338 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.226345 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:50.226350 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:50.226409 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:50.251194 1311248 cri.go:89] found id: ""
	I1218 00:37:50.251208 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.251215 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:50.251220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:50.251287 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:50.278029 1311248 cri.go:89] found id: ""
	I1218 00:37:50.278043 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.278050 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:50.278057 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:50.278067 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:50.338421 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:50.338443 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:50.368542 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:50.368565 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:50.423715 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:50.423734 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:50.438292 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:50.438308 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:50.499550 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:50.491066   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.491885   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493525   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493818   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.495259   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:52.999811 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:53.011389 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:53.011453 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:53.036842 1311248 cri.go:89] found id: ""
	I1218 00:37:53.036861 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.036869 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:53.036884 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:53.036981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:53.069368 1311248 cri.go:89] found id: ""
	I1218 00:37:53.069383 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.069391 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:53.069397 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:53.069458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:53.093990 1311248 cri.go:89] found id: ""
	I1218 00:37:53.094004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.094011 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:53.094016 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:53.094076 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:53.119386 1311248 cri.go:89] found id: ""
	I1218 00:37:53.119400 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.119417 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:53.119423 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:53.119487 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:53.144979 1311248 cri.go:89] found id: ""
	I1218 00:37:53.144992 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.144999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:53.145005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:53.145062 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:53.171485 1311248 cri.go:89] found id: ""
	I1218 00:37:53.171499 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.171506 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:53.171512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:53.171570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:53.198517 1311248 cri.go:89] found id: ""
	I1218 00:37:53.198530 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.198537 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:53.198545 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:53.198556 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:53.225701 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:53.225719 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:53.280281 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:53.280300 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:53.295217 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:53.295235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:53.360920 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:53.352238   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.352952   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.354786   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.355181   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.356802   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:53.360930 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:53.360940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:55.923673 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:55.935823 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:55.935880 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:55.963196 1311248 cri.go:89] found id: ""
	I1218 00:37:55.963210 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.963217 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:55.963222 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:55.963278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:55.992688 1311248 cri.go:89] found id: ""
	I1218 00:37:55.992701 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.992708 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:55.992713 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:55.992778 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:56.032683 1311248 cri.go:89] found id: ""
	I1218 00:37:56.032696 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.032705 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:56.032711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:56.032779 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:56.061554 1311248 cri.go:89] found id: ""
	I1218 00:37:56.061568 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.061575 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:56.061580 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:56.061639 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:56.090855 1311248 cri.go:89] found id: ""
	I1218 00:37:56.090869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.090877 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:56.090882 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:56.090943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:56.115990 1311248 cri.go:89] found id: ""
	I1218 00:37:56.116004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.116020 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:56.116026 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:56.116085 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:56.141361 1311248 cri.go:89] found id: ""
	I1218 00:37:56.141385 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.141393 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:56.141401 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:56.141412 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:56.202998 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:56.194857   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.195410   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197059   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197614   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.199168   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:56.203008 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:56.203019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:56.263974 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:56.263994 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:56.295494 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:56.295509 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:56.350431 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:56.350450 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
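Between API checks, each cycle collects the same host-side logs shown here: the kubelet and containerd journals, filtered dmesg, and a CRI/Docker container listing. A stdlib-only sketch of that gathering step follows, with the command strings copied verbatim from the log; running them locally via /bin/bash stands in for minikube's ssh_runner, which is an assumption for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through a shell, exactly as the
// ssh_runner lines above do, and prints whatever comes back.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("containerd", `sudo journalctl -u containerd -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	// Backticks here are shell command substitution, so this one uses an
	// interpreted Go string instead of a raw string.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}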
	I1218 00:37:58.867454 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:58.877799 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:58.877861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:58.929615 1311248 cri.go:89] found id: ""
	I1218 00:37:58.929629 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.929636 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:58.929642 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:58.929701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:58.958880 1311248 cri.go:89] found id: ""
	I1218 00:37:58.958894 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.958900 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:58.958906 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:58.958965 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:58.983460 1311248 cri.go:89] found id: ""
	I1218 00:37:58.983475 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.983482 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:58.983487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:58.983547 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:59.009476 1311248 cri.go:89] found id: ""
	I1218 00:37:59.009490 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.009497 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:59.009503 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:59.009563 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:59.033436 1311248 cri.go:89] found id: ""
	I1218 00:37:59.033450 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.033457 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:59.033462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:59.033522 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:59.058635 1311248 cri.go:89] found id: ""
	I1218 00:37:59.058649 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.058656 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:59.058661 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:59.058719 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:59.082644 1311248 cri.go:89] found id: ""
	I1218 00:37:59.082658 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.082666 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:59.082673 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:59.082684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:59.138067 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:59.138085 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:59.154868 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:59.154884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:59.232032 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:59.223238   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.223623   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225341   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225946   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.227686   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:59.232043 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:59.232061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:59.297264 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:59.297288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:01.827672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:01.838270 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:01.838330 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:01.862836 1311248 cri.go:89] found id: ""
	I1218 00:38:01.862855 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.862862 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:01.862867 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:01.862925 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:01.892782 1311248 cri.go:89] found id: ""
	I1218 00:38:01.892797 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.892804 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:01.892810 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:01.892876 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:01.919043 1311248 cri.go:89] found id: ""
	I1218 00:38:01.919068 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.919076 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:01.919081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:01.919148 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:01.945252 1311248 cri.go:89] found id: ""
	I1218 00:38:01.945267 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.945285 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:01.945291 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:01.945368 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:01.974338 1311248 cri.go:89] found id: ""
	I1218 00:38:01.974353 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.974361 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:01.974366 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:01.974433 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:02.003307 1311248 cri.go:89] found id: ""
	I1218 00:38:02.003324 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.003332 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:02.003339 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:02.003423 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:02.030938 1311248 cri.go:89] found id: ""
	I1218 00:38:02.030953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.030960 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:02.030968 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:02.030979 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:02.100511 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:02.091512   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.092078   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.093817   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.094344   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.095821   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:02.100521 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:02.100531 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:02.162112 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:02.162132 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:02.191957 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:02.191976 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:02.248095 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:02.248116 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
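The "pgrep -xnf kube-apiserver.*minikube.*" probe that opens every cycle is a cheap process-level check that runs before the CRI queries. A minimal sketch of it, assuming pgrep's usual convention of exiting non-zero when no process matches; the helper name is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// apiserverRunning reports whether a kube-apiserver process matching the
// minikube pattern exists. pgrep exits 1 when nothing matches, so a
// non-nil error here usually just means "not running".
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	fmt.Println("kube-apiserver process present:", apiserverRunning())
}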
	I1218 00:38:04.765008 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:04.775100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:04.775168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:04.799097 1311248 cri.go:89] found id: ""
	I1218 00:38:04.799125 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.799132 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:04.799137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:04.799206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:04.826968 1311248 cri.go:89] found id: ""
	I1218 00:38:04.826993 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.827000 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:04.827005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:04.827083 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:04.860005 1311248 cri.go:89] found id: ""
	I1218 00:38:04.860020 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.860027 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:04.860032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:04.860103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:04.886293 1311248 cri.go:89] found id: ""
	I1218 00:38:04.886307 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.886315 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:04.886320 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:04.886385 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:04.918579 1311248 cri.go:89] found id: ""
	I1218 00:38:04.918594 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.918601 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:04.918607 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:04.918676 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:04.945152 1311248 cri.go:89] found id: ""
	I1218 00:38:04.945167 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.945183 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:04.945189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:04.945258 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:04.976410 1311248 cri.go:89] found id: ""
	I1218 00:38:04.976424 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.976432 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:04.976439 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:04.976449 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:05.032080 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:05.032100 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:05.047379 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:05.047396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:05.113965 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:05.105127   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.105769   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.107565   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.108085   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.109817   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:05.113975 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:05.113986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:05.174878 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:05.174897 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:07.706926 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:07.717077 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:07.717140 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:07.741430 1311248 cri.go:89] found id: ""
	I1218 00:38:07.741464 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.741471 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:07.741477 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:07.741538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:07.766770 1311248 cri.go:89] found id: ""
	I1218 00:38:07.766784 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.766791 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:07.766796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:07.766855 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:07.790902 1311248 cri.go:89] found id: ""
	I1218 00:38:07.790917 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.790924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:07.790929 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:07.791005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:07.819681 1311248 cri.go:89] found id: ""
	I1218 00:38:07.819696 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.819703 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:07.819708 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:07.819770 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:07.844498 1311248 cri.go:89] found id: ""
	I1218 00:38:07.844512 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.844519 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:07.844524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:07.844584 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:07.870028 1311248 cri.go:89] found id: ""
	I1218 00:38:07.870043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.870050 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:07.870057 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:07.870125 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:07.906969 1311248 cri.go:89] found id: ""
	I1218 00:38:07.906984 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.906999 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:07.907007 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:07.907017 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:07.974278 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:07.974306 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:07.989533 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:07.989551 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:08.055867 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:08.047282   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.048178   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.049950   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.050299   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.051821   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:08.055877 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:08.055889 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:08.118669 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:08.118693 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:10.651292 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:10.663394 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:10.663471 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:10.687520 1311248 cri.go:89] found id: ""
	I1218 00:38:10.687534 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.687542 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:10.687547 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:10.687608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:10.713147 1311248 cri.go:89] found id: ""
	I1218 00:38:10.713161 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.713168 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:10.713173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:10.713231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:10.737926 1311248 cri.go:89] found id: ""
	I1218 00:38:10.737940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.737948 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:10.737953 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:10.738012 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:10.763422 1311248 cri.go:89] found id: ""
	I1218 00:38:10.763436 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.763443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:10.763449 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:10.763508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:10.788619 1311248 cri.go:89] found id: ""
	I1218 00:38:10.788659 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.788672 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:10.788677 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:10.788738 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:10.813718 1311248 cri.go:89] found id: ""
	I1218 00:38:10.813732 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.813740 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:10.813745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:10.813803 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:10.837575 1311248 cri.go:89] found id: ""
	I1218 00:38:10.837588 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.837595 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:10.837603 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:10.837614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:10.852133 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:10.852149 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:10.917780 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:10.909596   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.910424   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912063   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912382   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.913799   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
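
The repeated memcache.go errors above all reduce to one symptom: nothing is serving on localhost:8441 inside the functional-232602 node, so every kubectl call gets a refused TCP connect. A minimal way to confirm that by hand (a sketch, not part of the test harness; it assumes the node is still up and that ss and curl are available in the node image):

    # Check whether anything is listening on the apiserver port inside the node.
    minikube -p functional-232602 ssh -- "sudo ss -ltnp | grep -w 8441 || echo 'nothing listening on :8441'"

    # Reproduce the failure mode kubectl hits: a refused connect to the
    # apiserver's /livez endpoint.
    minikube -p functional-232602 ssh -- "curl -ksS --max-time 5 https://localhost:8441/livez"
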
	I1218 00:38:10.917791 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:10.917801 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:10.987674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:10.987695 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:11.024530 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:11.024549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.581947 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:13.592491 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:13.592556 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:13.617579 1311248 cri.go:89] found id: ""
	I1218 00:38:13.617593 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.617600 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:13.617605 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:13.617665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:13.641975 1311248 cri.go:89] found id: ""
	I1218 00:38:13.641990 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.641997 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:13.642002 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:13.642060 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:13.667128 1311248 cri.go:89] found id: ""
	I1218 00:38:13.667142 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.667149 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:13.667154 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:13.667215 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:13.699564 1311248 cri.go:89] found id: ""
	I1218 00:38:13.699579 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.699586 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:13.699591 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:13.699655 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:13.727620 1311248 cri.go:89] found id: ""
	I1218 00:38:13.727634 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.727641 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:13.727646 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:13.727703 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:13.756118 1311248 cri.go:89] found id: ""
	I1218 00:38:13.756132 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.756138 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:13.756144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:13.756204 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:13.780706 1311248 cri.go:89] found id: ""
	I1218 00:38:13.780720 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.780728 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:13.780736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:13.780746 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:13.842845 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:13.842864 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:13.871826 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:13.871843 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.932300 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:13.932319 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:13.950089 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:13.950106 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:14.022114 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:14.013202   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.013842   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.015677   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.016243   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.018014   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
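
Each retry cycle in this log is the same scan: a pgrep for a live kube-apiserver process, then one crictl query per control-plane component, every one of which returns an empty ID list here. Condensed into a single loop (a sketch built only from the commands shown above, meant to be run inside the node, e.g. via minikube ssh):

    # Is a kube-apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'

    # One crictl query per component; an empty result corresponds to the
    # 'No container was found matching ...' warnings above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      echo "${name}: ${ids:-none}"
    done
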
	I1218 00:38:16.522391 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:16.534271 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:16.534357 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:16.558729 1311248 cri.go:89] found id: ""
	I1218 00:38:16.558743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.558757 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:16.558762 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:16.558819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:16.587758 1311248 cri.go:89] found id: ""
	I1218 00:38:16.587772 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.587779 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:16.587784 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:16.587841 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:16.612793 1311248 cri.go:89] found id: ""
	I1218 00:38:16.612807 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.612814 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:16.612819 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:16.612907 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:16.637417 1311248 cri.go:89] found id: ""
	I1218 00:38:16.637431 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.637438 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:16.637443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:16.637508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:16.662059 1311248 cri.go:89] found id: ""
	I1218 00:38:16.662073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.662080 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:16.662085 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:16.662141 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:16.686710 1311248 cri.go:89] found id: ""
	I1218 00:38:16.686724 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.686731 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:16.686737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:16.686794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:16.711539 1311248 cri.go:89] found id: ""
	I1218 00:38:16.711553 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.711561 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:16.711569 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:16.711579 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:16.739136 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:16.739151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:16.794672 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:16.794694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:16.809147 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:16.809171 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:16.878702 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:16.870875   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.871602   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873068   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873374   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.874856   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:16.878711 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:16.878723 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.444575 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:19.454827 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:19.454887 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:19.482057 1311248 cri.go:89] found id: ""
	I1218 00:38:19.482071 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.482078 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:19.482083 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:19.482142 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:19.505124 1311248 cri.go:89] found id: ""
	I1218 00:38:19.505138 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.505146 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:19.505151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:19.505209 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:19.530010 1311248 cri.go:89] found id: ""
	I1218 00:38:19.530024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.530031 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:19.530037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:19.530094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:19.555994 1311248 cri.go:89] found id: ""
	I1218 00:38:19.556008 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.556025 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:19.556030 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:19.556087 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:19.580515 1311248 cri.go:89] found id: ""
	I1218 00:38:19.580539 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.580546 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:19.580554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:19.580619 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:19.605333 1311248 cri.go:89] found id: ""
	I1218 00:38:19.605348 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.605354 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:19.605360 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:19.605418 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:19.630483 1311248 cri.go:89] found id: ""
	I1218 00:38:19.630497 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.630504 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:19.630512 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:19.630522 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:19.693128 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:19.684824   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.685371   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.686899   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.687332   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.688942   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:19.693138 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:19.693148 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.755570 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:19.755590 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:19.785139 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:19.785156 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:19.842579 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:19.842605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
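
With no control-plane containers to inspect, each cycle falls back to gathering raw logs: the kubelet and containerd units via journalctl, warnings and above from dmesg, container status, and a kubectl describe nodes that keeps failing while the apiserver is down. The same bundle, collected from the commands in this log so it can be run by hand inside the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
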
	I1218 00:38:22.358338 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:22.368724 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:22.368793 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:22.392394 1311248 cri.go:89] found id: ""
	I1218 00:38:22.392408 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.392415 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:22.392420 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:22.392478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:22.419029 1311248 cri.go:89] found id: ""
	I1218 00:38:22.419043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.419050 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:22.419055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:22.419117 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:22.443838 1311248 cri.go:89] found id: ""
	I1218 00:38:22.443852 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.443859 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:22.443864 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:22.443923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:22.467780 1311248 cri.go:89] found id: ""
	I1218 00:38:22.467794 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.467801 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:22.467807 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:22.467864 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:22.497254 1311248 cri.go:89] found id: ""
	I1218 00:38:22.497268 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.497276 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:22.497281 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:22.497340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:22.521672 1311248 cri.go:89] found id: ""
	I1218 00:38:22.521686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.521693 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:22.521699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:22.521758 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:22.548085 1311248 cri.go:89] found id: ""
	I1218 00:38:22.548119 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.548126 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:22.548134 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:22.548144 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:22.614828 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:22.614852 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:22.643447 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:22.643462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:22.698947 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:22.698967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:22.713971 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:22.713986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:22.789955 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:22.774480   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.775084   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.776811   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.783961   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.784735   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:25.290158 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:25.300164 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:25.300226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:25.323897 1311248 cri.go:89] found id: ""
	I1218 00:38:25.323912 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.323919 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:25.323924 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:25.323985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:25.352232 1311248 cri.go:89] found id: ""
	I1218 00:38:25.352245 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.352252 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:25.352257 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:25.352314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:25.376749 1311248 cri.go:89] found id: ""
	I1218 00:38:25.376785 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.376792 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:25.376797 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:25.376868 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:25.401002 1311248 cri.go:89] found id: ""
	I1218 00:38:25.401015 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.401023 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:25.401028 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:25.401089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:25.426497 1311248 cri.go:89] found id: ""
	I1218 00:38:25.426510 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.426517 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:25.426522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:25.426579 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:25.450505 1311248 cri.go:89] found id: ""
	I1218 00:38:25.450518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.450525 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:25.450536 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:25.450593 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:25.478999 1311248 cri.go:89] found id: ""
	I1218 00:38:25.479013 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.479029 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:25.479037 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:25.479048 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:25.540968 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:25.532836   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.533607   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535202   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535518   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.537180   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:25.540977 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:25.540987 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:25.601527 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:25.601546 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:25.633804 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:25.633826 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:25.691056 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:25.691076 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.206639 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:28.217134 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:28.217198 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:28.242357 1311248 cri.go:89] found id: ""
	I1218 00:38:28.242372 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.242378 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:28.242384 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:28.242449 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:28.271155 1311248 cri.go:89] found id: ""
	I1218 00:38:28.271169 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.271176 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:28.271181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:28.271242 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:28.296330 1311248 cri.go:89] found id: ""
	I1218 00:38:28.296345 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.296352 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:28.296357 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:28.296413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:28.320425 1311248 cri.go:89] found id: ""
	I1218 00:38:28.320449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.320456 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:28.320461 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:28.320528 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:28.345590 1311248 cri.go:89] found id: ""
	I1218 00:38:28.345603 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.345610 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:28.345625 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:28.345688 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:28.374296 1311248 cri.go:89] found id: ""
	I1218 00:38:28.374310 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.374334 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:28.374340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:28.374407 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:28.397991 1311248 cri.go:89] found id: ""
	I1218 00:38:28.398006 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.398014 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:28.398023 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:28.398033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:28.453794 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:28.453812 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.468531 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:28.468547 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:28.536754 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:28.527630   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529120   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529993   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531712   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531990   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:28.536784 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:28.536796 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:28.599155 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:28.599174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
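
The container-status command above packs a small fallback chain into one line: resolve crictl via which (falling back to the bare name if it is not on PATH), and if that invocation fails for any reason, list containers with docker instead. Written out long-hand (an equivalent expansion, not code from the harness):

    # Expansion of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    CRICTL="$(which crictl || echo crictl)"   # resolved path, or the bare name
    if ! sudo "${CRICTL}" ps -a; then
      sudo docker ps -a                       # fallback when crictl is absent or errors
    fi
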
	I1218 00:38:31.143176 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:31.156254 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:31.156313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:31.185437 1311248 cri.go:89] found id: ""
	I1218 00:38:31.185452 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.185460 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:31.185472 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:31.185531 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:31.215130 1311248 cri.go:89] found id: ""
	I1218 00:38:31.215144 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.215153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:31.215157 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:31.215217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:31.240144 1311248 cri.go:89] found id: ""
	I1218 00:38:31.240157 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.240164 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:31.240169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:31.240227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:31.265058 1311248 cri.go:89] found id: ""
	I1218 00:38:31.265072 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.265079 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:31.265084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:31.265150 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:31.289354 1311248 cri.go:89] found id: ""
	I1218 00:38:31.289368 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.289375 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:31.289380 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:31.289438 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:31.319744 1311248 cri.go:89] found id: ""
	I1218 00:38:31.319758 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.319766 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:31.319771 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:31.319826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:31.343739 1311248 cri.go:89] found id: ""
	I1218 00:38:31.343753 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.343760 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:31.343768 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:31.343778 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:31.399267 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:31.399287 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:31.413578 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:31.413595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:31.478705 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:31.470217   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.470850   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472351   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472980   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.474460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:31.478714 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:31.478724 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:31.540680 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:31.540703 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.068816 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:34.079525 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:34.079589 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:34.106415 1311248 cri.go:89] found id: ""
	I1218 00:38:34.106432 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.106440 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:34.106445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:34.106506 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:34.131181 1311248 cri.go:89] found id: ""
	I1218 00:38:34.131195 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.131202 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:34.131208 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:34.131265 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:34.166885 1311248 cri.go:89] found id: ""
	I1218 00:38:34.166898 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.166906 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:34.166911 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:34.166970 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:34.197771 1311248 cri.go:89] found id: ""
	I1218 00:38:34.197786 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.197793 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:34.197798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:34.197856 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:34.226531 1311248 cri.go:89] found id: ""
	I1218 00:38:34.226546 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.226552 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:34.226557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:34.226614 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:34.252100 1311248 cri.go:89] found id: ""
	I1218 00:38:34.252114 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.252121 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:34.252127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:34.252185 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:34.278653 1311248 cri.go:89] found id: ""
	I1218 00:38:34.278667 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.278675 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:34.278683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:34.278694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
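	Decoded, the dmesg invocation asks for human-readable timestamps (-H), no pager (-P), no colour (-L=never), and only kernel messages of warning severity or worse, trimmed to the last 400 lines. The same command in long-option spelling:

		sudo dmesg --human --nopager --color=never --level warn,err,crit,alert,emerg | tail -n 400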
	I1218 00:38:34.293444 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:34.293463 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:34.359201 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
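	The describe nodes step runs the version-pinned kubectl that minikube installed on the node against the node-local kubeconfig, so its failure isolates the problem to the apiserver itself rather than to host-side networking or the host's kubectl. Reproduced by hand it would look roughly like this (a sketch; substitute the profile under test):

		minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig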
	I1218 00:38:34.359211 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:34.359221 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:34.420750 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:34.420773 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.449621 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:34.449637 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:37.006206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:37.019401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:37.019472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:37.047646 1311248 cri.go:89] found id: ""
	I1218 00:38:37.047660 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.047667 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:37.047673 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:37.047733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:37.076612 1311248 cri.go:89] found id: ""
	I1218 00:38:37.076646 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.076653 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:37.076658 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:37.076717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:37.102368 1311248 cri.go:89] found id: ""
	I1218 00:38:37.102383 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.102390 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:37.102395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:37.102452 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:37.126829 1311248 cri.go:89] found id: ""
	I1218 00:38:37.126843 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.126850 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:37.126855 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:37.126913 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:37.159965 1311248 cri.go:89] found id: ""
	I1218 00:38:37.159980 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.159987 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:37.159992 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:37.160048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:37.193535 1311248 cri.go:89] found id: ""
	I1218 00:38:37.193549 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.193558 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:37.193564 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:37.193622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:37.224708 1311248 cri.go:89] found id: ""
	I1218 00:38:37.224723 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.224730 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:37.224738 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:37.224749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:37.287765 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:37.287775 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:37.287787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:37.349218 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:37.349239 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:37.377886 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:37.377902 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:37.435205 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:37.435224 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
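	Each diagnostic cycle ends with the same probe, repeated on a roughly three-second backoff as the timestamps show: a pgrep for a running apiserver process, where -x requires an exact match of the pattern, -f matches against the full command line, and -n picks the newest matching process. Until this exits 0, the loop keeps re-gathering the same logs:

		sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo 'apiserver process found'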
	I1218 00:38:39.950327 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:39.960885 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:39.960948 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:39.985573 1311248 cri.go:89] found id: ""
	I1218 00:38:39.985587 1311248 logs.go:282] 0 containers: []
	W1218 00:38:39.985596 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:39.985602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:39.985662 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:40.020843 1311248 cri.go:89] found id: ""
	I1218 00:38:40.020859 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.020867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:40.020873 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:40.020949 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:40.067991 1311248 cri.go:89] found id: ""
	I1218 00:38:40.068007 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.068015 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:40.068021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:40.068096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:40.097024 1311248 cri.go:89] found id: ""
	I1218 00:38:40.097039 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.097047 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:40.097053 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:40.097118 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:40.127502 1311248 cri.go:89] found id: ""
	I1218 00:38:40.127518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.127526 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:40.127531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:40.127595 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:40.165566 1311248 cri.go:89] found id: ""
	I1218 00:38:40.165580 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.165587 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:40.165593 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:40.165660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:40.204927 1311248 cri.go:89] found id: ""
	I1218 00:38:40.204940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.204948 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:40.204956 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:40.204967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:40.222297 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:40.222314 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:40.292382 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:40.292392 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:40.292403 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:40.353852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:40.353871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:40.385828 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:40.385844 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:42.942427 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:42.952937 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:42.952996 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:42.982184 1311248 cri.go:89] found id: ""
	I1218 00:38:42.982201 1311248 logs.go:282] 0 containers: []
	W1218 00:38:42.982208 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:42.982213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:42.982271 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:43.009928 1311248 cri.go:89] found id: ""
	I1218 00:38:43.009944 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.009952 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:43.009957 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:43.010021 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:43.036384 1311248 cri.go:89] found id: ""
	I1218 00:38:43.036397 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.036405 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:43.036410 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:43.036472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:43.061945 1311248 cri.go:89] found id: ""
	I1218 00:38:43.061959 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.061967 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:43.061972 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:43.062030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:43.087977 1311248 cri.go:89] found id: ""
	I1218 00:38:43.087992 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.087999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:43.088005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:43.088069 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:43.113297 1311248 cri.go:89] found id: ""
	I1218 00:38:43.113312 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.113319 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:43.113324 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:43.113390 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:43.148378 1311248 cri.go:89] found id: ""
	I1218 00:38:43.148392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.148399 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:43.148408 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:43.148419 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:43.218202 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:43.218227 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:43.234424 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:43.234441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:43.295849 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:43.287382   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.287819   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289537   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289959   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.291588   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:43.287382   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.287819   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289537   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289959   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.291588   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:43.295860 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:43.295871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:43.357903 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:43.357924 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:45.889646 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:45.899918 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:45.899981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:45.923610 1311248 cri.go:89] found id: ""
	I1218 00:38:45.923623 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.923630 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:45.923635 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:45.923696 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:45.949282 1311248 cri.go:89] found id: ""
	I1218 00:38:45.949296 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.949304 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:45.949309 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:45.949371 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:45.974071 1311248 cri.go:89] found id: ""
	I1218 00:38:45.974085 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.974092 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:45.974097 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:45.974153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:45.997865 1311248 cri.go:89] found id: ""
	I1218 00:38:45.997880 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.997887 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:45.997892 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:45.997953 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:46.026399 1311248 cri.go:89] found id: ""
	I1218 00:38:46.026413 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.026426 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:46.026432 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:46.026490 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:46.060011 1311248 cri.go:89] found id: ""
	I1218 00:38:46.060026 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.060033 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:46.060038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:46.060097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:46.095378 1311248 cri.go:89] found id: ""
	I1218 00:38:46.095392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.095398 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:46.095407 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:46.095418 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:46.110828 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:46.110845 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:46.194637 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:46.194647 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:46.194657 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:46.265968 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:46.265989 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:46.298428 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:46.298444 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
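	Every cycle reports zero containers for all seven expected components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet). Nothing is crash-looping; the control-plane containers were never created at all, which points at kubelet failing before static pod creation, so the kubelet journal gathered above is the first place to look. A focused view of it (a sketch, run on the node):

		sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20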
	I1218 00:38:48.855794 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:48.868391 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:48.868457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:48.898010 1311248 cri.go:89] found id: ""
	I1218 00:38:48.898024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.898032 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:48.898037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:48.898097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:48.926962 1311248 cri.go:89] found id: ""
	I1218 00:38:48.926976 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.926984 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:48.926989 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:48.927046 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:48.953073 1311248 cri.go:89] found id: ""
	I1218 00:38:48.953096 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.953104 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:48.953109 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:48.953171 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:48.978527 1311248 cri.go:89] found id: ""
	I1218 00:38:48.978542 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.978548 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:48.978554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:48.978611 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:49.005774 1311248 cri.go:89] found id: ""
	I1218 00:38:49.005791 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.005800 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:49.005805 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:49.005881 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:49.032714 1311248 cri.go:89] found id: ""
	I1218 00:38:49.032743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.032751 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:49.032756 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:49.032845 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:49.058437 1311248 cri.go:89] found id: ""
	I1218 00:38:49.058451 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.058459 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:49.058468 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:49.058478 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:49.114793 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:49.114813 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:49.129898 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:49.129916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:49.218168 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:49.218179 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:49.218190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:49.289574 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:49.289595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:51.822637 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:51.833100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:51.833161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:51.858494 1311248 cri.go:89] found id: ""
	I1218 00:38:51.858508 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.858515 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:51.858520 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:51.858609 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:51.883202 1311248 cri.go:89] found id: ""
	I1218 00:38:51.883217 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.883224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:51.883229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:51.883286 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:51.911732 1311248 cri.go:89] found id: ""
	I1218 00:38:51.911746 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.911753 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:51.911758 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:51.911813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:51.937059 1311248 cri.go:89] found id: ""
	I1218 00:38:51.937073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.937080 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:51.937086 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:51.937144 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:51.960983 1311248 cri.go:89] found id: ""
	I1218 00:38:51.960998 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.961016 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:51.961021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:51.961095 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:51.985889 1311248 cri.go:89] found id: ""
	I1218 00:38:51.985904 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.985911 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:51.985916 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:51.985976 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:52.012132 1311248 cri.go:89] found id: ""
	I1218 00:38:52.012147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:52.012155 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:52.012163 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:52.012174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:52.080718 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:52.080736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:52.080748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:52.144427 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:52.144446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:52.176847 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:52.176869 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:52.239307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:52.239325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:54.754340 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:54.764793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:54.764857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:54.794012 1311248 cri.go:89] found id: ""
	I1218 00:38:54.794027 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.794034 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:54.794039 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:54.794096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:54.823133 1311248 cri.go:89] found id: ""
	I1218 00:38:54.823147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.823155 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:54.823160 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:54.823216 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:54.847977 1311248 cri.go:89] found id: ""
	I1218 00:38:54.847991 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.847998 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:54.848003 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:54.848064 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:54.873449 1311248 cri.go:89] found id: ""
	I1218 00:38:54.873462 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.873469 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:54.873475 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:54.873532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:54.897891 1311248 cri.go:89] found id: ""
	I1218 00:38:54.897905 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.897922 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:54.897928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:54.897985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:54.922432 1311248 cri.go:89] found id: ""
	I1218 00:38:54.922449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.922456 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:54.922462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:54.922520 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:54.947869 1311248 cri.go:89] found id: ""
	I1218 00:38:54.947884 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.947908 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:54.947916 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:54.947927 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:55.005409 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:55.005434 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:55.026491 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:55.026508 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:55.094641 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:55.094652 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:55.094663 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
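	containerd itself appears healthy throughout: the journal gathers and every crictl ps call complete without error, they simply return nothing. That narrows the gap to somewhere between kubelet and the runtime rather than the runtime itself, which could be confirmed directly (a sketch, run on the node):

		sudo systemctl is-active containerd
		sudo crictl info | head -n 20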
	I1218 00:38:55.159462 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:55.159481 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.695023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:57.706079 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:57.706147 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:57.735083 1311248 cri.go:89] found id: ""
	I1218 00:38:57.735106 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.735114 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:57.735119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:57.735178 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:57.762228 1311248 cri.go:89] found id: ""
	I1218 00:38:57.762242 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.762249 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:57.762255 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:57.762313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:57.787211 1311248 cri.go:89] found id: ""
	I1218 00:38:57.787226 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.787233 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:57.787238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:57.787303 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:57.812671 1311248 cri.go:89] found id: ""
	I1218 00:38:57.812686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.812693 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:57.812699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:57.812762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:57.840939 1311248 cri.go:89] found id: ""
	I1218 00:38:57.840953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.840961 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:57.840966 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:57.841031 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:57.867148 1311248 cri.go:89] found id: ""
	I1218 00:38:57.867163 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.867170 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:57.867175 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:57.867232 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:57.891633 1311248 cri.go:89] found id: ""
	I1218 00:38:57.891648 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.891665 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:57.891674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:57.891684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.918896 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:57.918913 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:57.975605 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:57.975625 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:57.990660 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:57.990676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:58.063038 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:58.053532   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.054524   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.056457   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.057354   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.058032   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
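	The repeated "connection refused" on [::1]:8441 above means the failure is at the TCP layer: nothing is listening on the apiserver port at all, rather than a TLS or auth problem. A minimal sketch that reproduces just that dial step from the node (Go, stdlib only; the endpoint is taken from the log, everything else is illustrative):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// kubectl fails before any HTTP happens; reproduce only the TCP dial.
	    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	    	if err != nil {
	    		// Expected while the apiserver is down: "connect: connection refused".
	    		fmt.Println("apiserver port closed:", err)
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("something is listening on :8441")
	    }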
	I1218 00:38:58.063048 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:58.063061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.627359 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:00.638675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:00.638768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:00.669731 1311248 cri.go:89] found id: ""
	I1218 00:39:00.669745 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.669752 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:00.669757 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:00.669824 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:00.697124 1311248 cri.go:89] found id: ""
	I1218 00:39:00.697138 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.697145 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:00.697151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:00.697211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:00.722455 1311248 cri.go:89] found id: ""
	I1218 00:39:00.722469 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.722476 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:00.722486 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:00.722545 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:00.750996 1311248 cri.go:89] found id: ""
	I1218 00:39:00.751010 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.751018 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:00.751023 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:00.751091 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:00.780012 1311248 cri.go:89] found id: ""
	I1218 00:39:00.780026 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.780033 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:00.780038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:00.780105 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:00.807119 1311248 cri.go:89] found id: ""
	I1218 00:39:00.807133 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.807140 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:00.807145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:00.807213 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:00.836658 1311248 cri.go:89] found id: ""
	I1218 00:39:00.836673 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.836681 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:00.836689 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:00.836699 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:00.851616 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:00.851633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:00.919909 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:00.908348   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.909901   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.912294   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.913473   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.915083   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:00.919918 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:00.919929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.985802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:00.985823 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:01.017691 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:01.017707 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
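	Each cycle above sweeps the CRI for every expected control-plane container by name, and every sweep comes back empty (found id: ""). A standalone sketch of the same sweep, assuming crictl is on the node's PATH and runnable via sudo; the component list and flags are copied from the log, the error handling is simplified:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	// The same components minikube probes in each cycle of the log.
	    	components := []string{
	    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    		"kube-proxy", "kube-controller-manager", "kindnet",
	    	}
	    	for _, name := range components {
	    		// Equivalent to: sudo crictl ps -a --quiet --name=<name>
	    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	    		if err != nil {
	    			fmt.Printf("%s: crictl failed: %v\n", name, err)
	    			continue
	    		}
	    		if id := strings.TrimSpace(string(out)); id == "" {
	    			fmt.Printf("no container found matching %q\n", name)
	    		} else {
	    			fmt.Printf("%s: %s\n", name, id)
	    		}
	    	}
	    }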
	I1218 00:39:03.574413 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:03.585024 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:03.585088 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:03.615721 1311248 cri.go:89] found id: ""
	I1218 00:39:03.615735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.615742 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:03.615748 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:03.615811 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:03.641216 1311248 cri.go:89] found id: ""
	I1218 00:39:03.641230 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.641237 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:03.641243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:03.641307 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:03.665604 1311248 cri.go:89] found id: ""
	I1218 00:39:03.665618 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.665625 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:03.665639 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:03.665717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:03.690936 1311248 cri.go:89] found id: ""
	I1218 00:39:03.690951 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.690958 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:03.690970 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:03.691030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:03.716763 1311248 cri.go:89] found id: ""
	I1218 00:39:03.716794 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.716806 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:03.716811 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:03.716898 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:03.742156 1311248 cri.go:89] found id: ""
	I1218 00:39:03.742170 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.742177 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:03.742183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:03.742240 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:03.771205 1311248 cri.go:89] found id: ""
	I1218 00:39:03.771220 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.771227 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:03.771235 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:03.771245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:03.834106 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:03.834127 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:03.863112 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:03.863129 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.919444 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:03.919465 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:03.934588 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:03.934607 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:04.000293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:03.991688   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.992412   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994242   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994901   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.996407   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:06.500788 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:06.511530 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:06.511596 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:06.536538 1311248 cri.go:89] found id: ""
	I1218 00:39:06.536554 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.536562 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:06.536568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:06.536651 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:06.565199 1311248 cri.go:89] found id: ""
	I1218 00:39:06.565213 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.565219 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:06.565224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:06.565283 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:06.589614 1311248 cri.go:89] found id: ""
	I1218 00:39:06.589628 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.589636 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:06.589641 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:06.589700 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:06.614004 1311248 cri.go:89] found id: ""
	I1218 00:39:06.614019 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.614027 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:06.614032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:06.614093 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:06.638819 1311248 cri.go:89] found id: ""
	I1218 00:39:06.638833 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.638841 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:06.638846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:06.638908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:06.666620 1311248 cri.go:89] found id: ""
	I1218 00:39:06.666634 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.666643 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:06.666648 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:06.666707 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:06.694192 1311248 cri.go:89] found id: ""
	I1218 00:39:06.694207 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.694216 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:06.694224 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:06.694235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:06.709318 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:06.709336 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:06.773553 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:06.764393   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.765251   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767036   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767646   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.769278   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:06.773564 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:06.773587 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:06.842917 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:06.842937 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:06.877280 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:06.877296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
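	The cycle timestamps (…:00, :03, :06, :09, …) show the underlying retry loop: re-run the pgrep check for a kube-apiserver process roughly every three seconds until a deadline. A stdlib-only sketch of that shape of wait loop; the three-second interval matches the log cadence, but the deadline is invented for illustration and is not minikube's actual value:

	    package main

	    import (
	    	"context"
	    	"errors"
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // apiserverRunning mirrors the check from the log:
	    //   sudo pgrep -xnf kube-apiserver.*minikube.*
	    // (-x exact match, -n newest process, -f match the full command line).
	    func apiserverRunning() bool {
	    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	    }

	    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	    	ticker := time.NewTicker(interval)
	    	defer ticker.Stop()
	    	for {
	    		if apiserverRunning() {
	    			return nil
	    		}
	    		select {
	    		case <-ctx.Done():
	    			return errors.New("timed out waiting for kube-apiserver")
	    		case <-ticker.C:
	    		}
	    	}
	    }

	    func main() {
	    	// Hypothetical deadline; the real test allows several minutes.
	    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	    	defer cancel()
	    	if err := waitForAPIServer(ctx, 3*time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }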
	I1218 00:39:09.433923 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:09.445181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:09.445248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:09.470100 1311248 cri.go:89] found id: ""
	I1218 00:39:09.470115 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.470122 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:09.470127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:09.470184 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:09.499949 1311248 cri.go:89] found id: ""
	I1218 00:39:09.499964 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.499973 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:09.499978 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:09.500044 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:09.526313 1311248 cri.go:89] found id: ""
	I1218 00:39:09.526328 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.526335 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:09.526340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:09.526404 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:09.551831 1311248 cri.go:89] found id: ""
	I1218 00:39:09.551844 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.551851 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:09.551857 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:09.551923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:09.577535 1311248 cri.go:89] found id: ""
	I1218 00:39:09.577549 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.577557 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:09.577561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:09.577622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:09.602570 1311248 cri.go:89] found id: ""
	I1218 00:39:09.602584 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.602591 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:09.602597 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:09.602658 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:09.630715 1311248 cri.go:89] found id: ""
	I1218 00:39:09.630729 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.630736 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:09.630745 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:09.630755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:09.686840 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:09.686859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:09.703315 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:09.703331 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:09.770650 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:09.762484   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.762995   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.764603   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.765164   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.766687   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:09.770660 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:09.770670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:09.832439 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:09.832457 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:12.361961 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:12.372127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:12.372190 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:12.408061 1311248 cri.go:89] found id: ""
	I1218 00:39:12.408075 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.408082 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:12.408088 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:12.408145 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:12.434860 1311248 cri.go:89] found id: ""
	I1218 00:39:12.434874 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.434881 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:12.434886 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:12.434946 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:12.465255 1311248 cri.go:89] found id: ""
	I1218 00:39:12.465270 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.465278 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:12.465283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:12.465341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:12.494330 1311248 cri.go:89] found id: ""
	I1218 00:39:12.494344 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.494350 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:12.494356 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:12.494420 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:12.518885 1311248 cri.go:89] found id: ""
	I1218 00:39:12.518900 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.518907 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:12.518912 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:12.518973 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:12.543549 1311248 cri.go:89] found id: ""
	I1218 00:39:12.543564 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.543573 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:12.543578 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:12.543641 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:12.568469 1311248 cri.go:89] found id: ""
	I1218 00:39:12.568483 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.568500 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:12.568507 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:12.568519 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:12.624017 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:12.624039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:12.639011 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:12.639028 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:12.703723 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:12.695186   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.695878   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697409   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697942   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.699375   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:12.703734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:12.703744 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:12.765331 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:12.765350 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.294913 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:15.308145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:15.308210 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:15.340203 1311248 cri.go:89] found id: ""
	I1218 00:39:15.340218 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.340225 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:15.340230 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:15.340289 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:15.367732 1311248 cri.go:89] found id: ""
	I1218 00:39:15.367747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.367754 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:15.367760 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:15.367818 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:15.398027 1311248 cri.go:89] found id: ""
	I1218 00:39:15.398042 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.398049 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:15.398055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:15.398115 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:15.430352 1311248 cri.go:89] found id: ""
	I1218 00:39:15.430366 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.430373 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:15.430379 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:15.430442 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:15.461268 1311248 cri.go:89] found id: ""
	I1218 00:39:15.461283 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.461291 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:15.461297 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:15.461361 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:15.487656 1311248 cri.go:89] found id: ""
	I1218 00:39:15.487671 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.487678 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:15.487684 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:15.487744 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:15.516835 1311248 cri.go:89] found id: ""
	I1218 00:39:15.516850 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.516858 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:15.516867 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:15.516877 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:15.584348 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:15.575875   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.576694   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578221   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578844   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.580399   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:15.584357 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:15.584377 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:15.646829 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:15.646849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.675913 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:15.675929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:15.731421 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:15.731441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
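	When no control-plane container turns up, each cycle falls back to the same host-side gathers seen throughout: the kubelet and containerd journals, filtered dmesg, container status, and a kubectl describe nodes against the node-local kubeconfig. A compressed sketch of that gather step; the commands are copied verbatim from the log lines above, the surrounding Go is illustrative:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Host-side log sources, in the order a typical cycle collects them.
	    	gathers := []struct{ name, cmd string }{
	    		{"kubelet", `sudo journalctl -u kubelet -n 400`},
	    		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
	    		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
	    		{"containerd", `sudo journalctl -u containerd -n 400`},
	    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	    	}
	    	for _, g := range gathers {
	    		// Run through bash, as ssh_runner does, so pipes and backticks work.
	    		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
	    		fmt.Printf("== %s (err=%v) ==\n%s\n", g.name, err, out)
	    	}
	    }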
	I1218 00:39:18.246605 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:18.257277 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:18.257340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:18.282497 1311248 cri.go:89] found id: ""
	I1218 00:39:18.282512 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.282519 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:18.282527 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:18.282594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:18.317178 1311248 cri.go:89] found id: ""
	I1218 00:39:18.317193 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.317200 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:18.317205 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:18.317267 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:18.342018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.342032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.342039 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:18.342044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:18.342098 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:18.366018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.366032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.366040 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:18.366045 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:18.366107 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:18.390880 1311248 cri.go:89] found id: ""
	I1218 00:39:18.390894 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.390902 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:18.390908 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:18.390968 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:18.427152 1311248 cri.go:89] found id: ""
	I1218 00:39:18.427167 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.427174 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:18.427181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:18.427241 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:18.458481 1311248 cri.go:89] found id: ""
	I1218 00:39:18.458495 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.458502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:18.458510 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:18.458521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:18.486379 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:18.486397 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:18.546371 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:18.546396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:18.561410 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:18.561431 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:18.625094 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:18.616770   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.617298   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.618947   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.619597   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.621337   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:18.625105 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:18.625118 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.187071 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:21.197777 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:21.197842 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:21.228457 1311248 cri.go:89] found id: ""
	I1218 00:39:21.228472 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.228479 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:21.228485 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:21.228551 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:21.254227 1311248 cri.go:89] found id: ""
	I1218 00:39:21.254240 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.254258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:21.254264 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:21.254321 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:21.283166 1311248 cri.go:89] found id: ""
	I1218 00:39:21.283180 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.283187 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:21.283193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:21.283259 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:21.307940 1311248 cri.go:89] found id: ""
	I1218 00:39:21.307954 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.307962 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:21.307967 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:21.308022 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:21.333576 1311248 cri.go:89] found id: ""
	I1218 00:39:21.333590 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.333597 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:21.333602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:21.333660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:21.357404 1311248 cri.go:89] found id: ""
	I1218 00:39:21.357418 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.357425 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:21.357430 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:21.357488 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:21.386789 1311248 cri.go:89] found id: ""
	I1218 00:39:21.386803 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.386811 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:21.386819 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:21.386830 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:21.467813 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:21.459694   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.460331   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.461853   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.462343   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.463834   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:21.459694   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.460331   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.461853   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.462343   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.463834   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
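The repeated "connection refused" on localhost:8441 is consistent with the empty crictl listings above: no kube-apiserver container exists, so nothing is bound to the port and every kubectl call fails at the TCP dial. A quick way to confirm there is no listener (a hypothetical diagnostic, not run in this job):

	sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"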
	I1218 00:39:21.467824 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:21.467834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.529999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:21.530019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:21.561213 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:21.561228 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:21.619110 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:21.619128 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
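Each retry iteration gathers the same five log sources. Stripped of the ssh_runner framing, the gather set is (commands verbatim from the lines above):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig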
	I1218 00:39:24.133884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:24.144224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:24.144298 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:24.169895 1311248 cri.go:89] found id: ""
	I1218 00:39:24.169909 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.169916 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:24.169922 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:24.169981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:24.196376 1311248 cri.go:89] found id: ""
	I1218 00:39:24.196390 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.196396 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:24.196401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:24.196464 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:24.220959 1311248 cri.go:89] found id: ""
	I1218 00:39:24.220978 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.220986 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:24.220991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:24.221051 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:24.246721 1311248 cri.go:89] found id: ""
	I1218 00:39:24.246735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.246745 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:24.246751 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:24.246819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:24.271380 1311248 cri.go:89] found id: ""
	I1218 00:39:24.271394 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.271401 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:24.271406 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:24.271466 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:24.298631 1311248 cri.go:89] found id: ""
	I1218 00:39:24.298645 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.298652 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:24.298657 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:24.298713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:24.322933 1311248 cri.go:89] found id: ""
	I1218 00:39:24.322947 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.322965 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:24.322974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:24.322984 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:24.378307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:24.378325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:24.395279 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:24.395296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:24.478731 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:24.469907   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.470803   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.472526   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.473242   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.474699   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:24.469907   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.470803   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.472526   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.473242   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.474699   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:24.478740 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:24.478750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:24.539558 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:24.539578 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.069527 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:27.079511 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:27.079570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:27.104730 1311248 cri.go:89] found id: ""
	I1218 00:39:27.104747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.104754 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:27.104759 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:27.104826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:27.134528 1311248 cri.go:89] found id: ""
	I1218 00:39:27.134543 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.134551 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:27.134556 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:27.134618 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:27.160290 1311248 cri.go:89] found id: ""
	I1218 00:39:27.160304 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.160311 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:27.160316 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:27.160374 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:27.187607 1311248 cri.go:89] found id: ""
	I1218 00:39:27.187621 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.187628 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:27.187634 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:27.187691 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:27.214602 1311248 cri.go:89] found id: ""
	I1218 00:39:27.214616 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.214623 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:27.214630 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:27.214690 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:27.239452 1311248 cri.go:89] found id: ""
	I1218 00:39:27.239466 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.239474 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:27.239479 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:27.239538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:27.268209 1311248 cri.go:89] found id: ""
	I1218 00:39:27.268232 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.268240 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:27.268248 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:27.268259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:27.283007 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:27.283033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:27.351624 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:27.341545   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.342008   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.344497   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.345754   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.346472   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:27.341545   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.342008   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.344497   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.345754   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.346472   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:27.351634 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:27.351644 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:27.414794 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:27.414814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.449027 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:27.449042 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:30.008353 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:30.051512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:30.051599 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:30.142207 1311248 cri.go:89] found id: ""
	I1218 00:39:30.142226 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.142234 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:30.142241 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:30.142317 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:30.175952 1311248 cri.go:89] found id: ""
	I1218 00:39:30.175967 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.175979 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:30.175985 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:30.176054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:30.202613 1311248 cri.go:89] found id: ""
	I1218 00:39:30.202640 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.202649 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:30.202655 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:30.202718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:30.229638 1311248 cri.go:89] found id: ""
	I1218 00:39:30.229653 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.229661 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:30.229666 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:30.229728 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:30.261192 1311248 cri.go:89] found id: ""
	I1218 00:39:30.261206 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.261214 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:30.261220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:30.261285 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:30.288158 1311248 cri.go:89] found id: ""
	I1218 00:39:30.288173 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.288180 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:30.288189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:30.288251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:30.314418 1311248 cri.go:89] found id: ""
	I1218 00:39:30.314432 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.314441 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:30.314450 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:30.314462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:30.369830 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:30.369849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:30.385018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:30.385037 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:30.467908 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:30.467920 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:30.467930 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:30.529075 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:30.529095 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:33.059241 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:33.070119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:33.070182 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:33.095716 1311248 cri.go:89] found id: ""
	I1218 00:39:33.095730 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.095738 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:33.095744 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:33.095804 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:33.121681 1311248 cri.go:89] found id: ""
	I1218 00:39:33.121697 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.121711 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:33.121717 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:33.121783 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:33.147424 1311248 cri.go:89] found id: ""
	I1218 00:39:33.147438 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.147445 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:33.147451 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:33.147514 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:33.173916 1311248 cri.go:89] found id: ""
	I1218 00:39:33.173931 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.173938 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:33.173943 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:33.174004 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:33.199675 1311248 cri.go:89] found id: ""
	I1218 00:39:33.199690 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.199697 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:33.199702 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:33.199761 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:33.229684 1311248 cri.go:89] found id: ""
	I1218 00:39:33.229698 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.229706 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:33.229711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:33.229771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:33.255931 1311248 cri.go:89] found id: ""
	I1218 00:39:33.255955 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.255963 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:33.255971 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:33.255981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:33.312520 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:33.312538 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:33.327008 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:33.327024 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:33.392853 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:33.392863 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:33.392873 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:33.462852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:33.462872 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:35.991111 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:36.001578 1311248 kubeadm.go:602] duration metric: took 4m4.636770246s to restartPrimaryControlPlane
	W1218 00:39:36.001631 1311248 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1218 00:39:36.001712 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
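After 4m4.6s of polling with no apiserver process, minikube stops trying to restart the existing control plane and falls back to a reset-and-reinit. The reset command is verbatim from the line above; --force skips kubeadm's confirmation prompt and --cri-socket points it at containerd:

	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force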
	I1218 00:39:36.428039 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:39:36.441875 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:39:36.449799 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:39:36.449855 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:39:36.457535 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:39:36.457543 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:39:36.457593 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:39:36.465339 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:39:36.465393 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:39:36.472406 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:39:36.480110 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:39:36.480163 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:39:36.487432 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.494964 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:39:36.495019 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.502375 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:39:36.509914 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:39:36.509976 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
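The stale-config sweep above reduces to: for each kubeconfig under /etc/kubernetes, keep it only if it already names the expected control-plane endpoint. Here every grep exits with status 2 because kubeadm reset already removed the files, so each rm -f is a no-op. The same commands, collapsed into a loop for readability (the loop form is illustrative):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done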
	I1218 00:39:36.517325 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:39:36.642706 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:39:36.643096 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:39:36.709498 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
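The three preflight warnings are advisory and do not abort init: the first two describe the host kernel (no "configs" module to parse, cgroup v1 deprecation), and the third only notes that the kubelet unit is not enabled at boot, which the warning itself says is fixed with:

	sudo systemctl enable kubelet.service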
	I1218 00:43:38.241451 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 00:43:38.241477 1311248 kubeadm.go:319] 
	I1218 00:43:38.241546 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 00:43:38.245587 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.245639 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.245728 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.245779 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.245813 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.245856 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.245904 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.245947 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.246021 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.246074 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.246124 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.246169 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.246253 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.246316 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.246394 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.246489 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.246578 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.246661 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.249668 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.249761 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.249825 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.249900 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.249985 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.250056 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.250107 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.250167 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.250231 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.250306 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.250386 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.250429 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.250494 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:38.250547 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:38.250611 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:38.250669 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:38.250731 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:38.250784 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:38.250896 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:38.250969 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:38.255653 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:38.255752 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:38.255840 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:38.255905 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:38.256008 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:38.256128 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:38.256248 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:38.256329 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:38.256365 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:38.256499 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:38.256681 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:43:38.256752 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000096267s
	I1218 00:43:38.256755 1311248 kubeadm.go:319] 
	I1218 00:43:38.256814 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:43:38.256853 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:43:38.256963 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:43:38.256967 1311248 kubeadm.go:319] 
	I1218 00:43:38.257093 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:43:38.257126 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:43:38.257155 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:43:38.257212 1311248 kubeadm.go:319] 
	W1218 00:43:38.257278 1311248 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000096267s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
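The actual failure is the wait-control-plane phase: kubeadm polls the kubelet's local healthz endpoint for up to 4m0s and never gets an answer, meaning the kubelet process itself never came up (the apiserver was never reached at all). The probe and the triage commands, exactly as named in the output above:

	curl -sSL http://127.0.0.1:10248/healthz
	systemctl status kubelet
	journalctl -xeu kubelet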
	
	I1218 00:43:38.257393 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:43:38.672580 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:43:38.686195 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:43:38.686247 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:43:38.694107 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:43:38.694119 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:43:38.694170 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:43:38.702289 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:43:38.702343 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:43:38.710380 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:43:38.718160 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:43:38.718218 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:43:38.726244 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.734209 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:43:38.734268 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.741907 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:43:38.749716 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:43:38.749773 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 00:43:38.757471 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:43:38.797919 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.797966 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.877731 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.877795 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.877835 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.877879 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.877926 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.877972 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.878019 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.878065 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.878112 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.878155 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.878202 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.878247 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.941330 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.941446 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.941535 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.951935 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.957317 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.957410 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.957474 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.957580 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.957646 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.957723 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.957784 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.957852 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.957913 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.957987 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.958059 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.958095 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.958151 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:39.202920 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:39.377892 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:39.964483 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:40.103558 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:40.457630 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:40.458383 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:40.462089 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:40.465489 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:40.465583 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:40.465654 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:40.465716 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:40.486385 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:40.486497 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:40.494535 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:40.494848 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:40.495030 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:40.625355 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:40.625497 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:47:40.625149 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000298437s
	I1218 00:47:40.625174 1311248 kubeadm.go:319] 
	I1218 00:47:40.625227 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:47:40.625262 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:47:40.625362 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:47:40.625367 1311248 kubeadm.go:319] 
	I1218 00:47:40.625481 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:47:40.625513 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:47:40.625550 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:47:40.625553 1311248 kubeadm.go:319] 
	I1218 00:47:40.629455 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:47:40.629954 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:47:40.630083 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:47:40.630316 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 00:47:40.630321 1311248 kubeadm.go:319] 
	I1218 00:47:40.630384 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 00:47:40.630455 1311248 kubeadm.go:403] duration metric: took 12m9.299018648s to StartCluster
	I1218 00:47:40.630487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:47:40.630549 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:47:40.655474 1311248 cri.go:89] found id: ""
	I1218 00:47:40.655489 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.655497 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:47:40.655502 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:47:40.655558 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:47:40.681677 1311248 cri.go:89] found id: ""
	I1218 00:47:40.681692 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.681699 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:47:40.681705 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:47:40.681772 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:47:40.714293 1311248 cri.go:89] found id: ""
	I1218 00:47:40.714307 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.714314 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:47:40.714319 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:47:40.714379 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:47:40.739065 1311248 cri.go:89] found id: ""
	I1218 00:47:40.739089 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.739097 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:47:40.739102 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:47:40.739168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:47:40.763653 1311248 cri.go:89] found id: ""
	I1218 00:47:40.763666 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.763673 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:47:40.763678 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:47:40.763737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:47:40.789038 1311248 cri.go:89] found id: ""
	I1218 00:47:40.789052 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.789059 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:47:40.789065 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:47:40.789124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:47:40.817866 1311248 cri.go:89] found id: ""
	I1218 00:47:40.817880 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.817887 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:47:40.817895 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:47:40.817905 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:47:40.877071 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:47:40.877090 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:47:40.891818 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:47:40.891835 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:47:40.956585 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:47:40.956595 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:47:40.956605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:47:41.023372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:47:41.023390 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1218 00:47:41.051126 1311248 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 00:47:41.051157 1311248 out.go:285] * 
	W1218 00:47:41.051213 1311248 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.051229 1311248 out.go:285] * 
	W1218 00:47:41.053388 1311248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:47:41.058223 1311248 out.go:203] 
	W1218 00:47:41.061890 1311248 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.061936 1311248 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 00:47:41.061956 1311248 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 00:47:41.065091 1311248 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724217200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724234003Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724272616Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724290872Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724301153Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724312311Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724321337Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724338510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724355125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724387017Z" level=info msg="Connect containerd service"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724787739Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.725358196Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744687707Z" level=info msg="Start subscribing containerd event"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744774532Z" level=info msg="Start recovering state"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744732367Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.745188078Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.785773770Z" level=info msg="Start event monitor"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.785958718Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786026286Z" level=info msg="Start streaming server"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786098128Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786157901Z" level=info msg="runtime interface starting up..."
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786221604Z" level=info msg="starting plugins..."
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786283461Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 00:35:29 functional-232602 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.788365819Z" level=info msg="containerd successfully booted in 0.084734s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:47:44.541021   21137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:44.541804   21137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:44.543562   21137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:44.544042   21137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:44.545711   21137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:47:44 up  7:30,  0 user,  load average: 0.62, 0.33, 0.47
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 18 00:47:41 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:41 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:41 functional-232602 kubelet[20919]: E1218 00:47:41.946199   20919 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:41 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:42 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 18 00:47:42 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:42 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:42 functional-232602 kubelet[21011]: E1218 00:47:42.728327   21011 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:42 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:42 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:43 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 18 00:47:43 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:43 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:43 functional-232602 kubelet[21032]: E1218 00:47:43.451619   21032 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:43 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:43 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:44 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 18 00:47:44 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:44 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:44 functional-232602 kubelet[21054]: E1218 00:47:44.193113   21054 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:44 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:44 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (389.574883ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (2.30s)
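The ComponentHealth failure above is downstream of the kubelet crash loop in the logs: kubelet v1.35.0-rc.1 exits with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up. Below is a minimal way to confirm the host's cgroup version, plus the workaround the kubeadm warning names; this is a sketch, not part of the recorded run, and the YAML field casing is an assumption based on the warning text:

    # cgroup2fs => cgroups v2; tmpfs => cgroups v1 (the failing case on this host)
    stat -fc %T /sys/fs/cgroup/

    # Suggestion printed by minikube itself in the log above:
    minikube start -p functional-232602 --extra-config=kubelet.cgroup-driver=systemd

    # Root-cause knob named by the [WARNING SystemVerification] text: in the
    # KubeletConfiguration, set FailCgroupV1 to false (presumably the YAML field
    # failCgroupV1: false) and explicitly skip the SystemVerification preflight.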

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-232602 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-232602 apply -f testdata/invalidsvc.yaml: exit status 1 (58.538591ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-232602 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.06s)
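The apply failed with "connection refused" rather than the validation error the test expects: the apiserver at 192.168.49.2:8441 never started (see the kubelet failure above). A quick way to separate "cluster unreachable" from "manifest invalid", as a sketch and not part of the recorded run:

    # Returns "ok" only when the apiserver is up; here it fails the same way:
    kubectl --context functional-232602 get --raw /readyz

    # --validate=false (mentioned in the error) only skips client-side validation;
    # it cannot help while the server itself is unreachable:
    kubectl --context functional-232602 apply --validate=false -f testdata/invalidsvc.yaml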

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-232602 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-232602 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-232602 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-232602 --alsologtostderr -v=1] stderr:
I1218 00:49:43.914794 1330040 out.go:360] Setting OutFile to fd 1 ...
I1218 00:49:43.914916 1330040 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:43.914927 1330040 out.go:374] Setting ErrFile to fd 2...
I1218 00:49:43.914933 1330040 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:43.915189 1330040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:49:43.915468 1330040 mustload.go:66] Loading cluster: functional-232602
I1218 00:49:43.915899 1330040 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:43.916374 1330040 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:49:43.933250 1330040 host.go:66] Checking if "functional-232602" exists ...
I1218 00:49:43.933565 1330040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1218 00:49:43.993540 1330040 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:49:43.984376948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1218 00:49:43.993668 1330040 api_server.go:166] Checking apiserver status ...
I1218 00:49:43.993729 1330040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1218 00:49:43.993775 1330040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:49:44.015143 1330040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
W1218 00:49:44.126320 1330040 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1218 00:49:44.129621 1330040 out.go:179] * The control-plane node functional-232602 apiserver is not running: (state=Stopped)
I1218 00:49:44.132702 1330040 out.go:179]   To start a cluster, run: "minikube start -p functional-232602"
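The dashboard command bailed out because minikube's apiserver probe (the "sudo pgrep -xnf kube-apiserver.*minikube.*" run above) found no matching process. The same probe can be reproduced by hand; a sketch, not part of the recorded run:

    # Exit status 1 (no match) is how minikube concluded state=Stopped:
    minikube -p functional-232602 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Or ask the container runtime directly, as the log gatherer does:
    minikube -p functional-232602 ssh -- sudo crictl ps -a --name kube-apiserver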
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
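For a spot check outside the harness, docker's Go-template flag can pull a single field out of the inspect document above instead of the whole JSON (an illustrative one-liner, not part of the recorded run):

	# prints the host port mapped to the container's SSH port; 33902 in the dump above
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-232602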
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (312.546564ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-232602 addons list                                                                                                                       │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ addons    │ functional-232602 addons list -o json                                                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh       │ functional-232602 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ mount     │ -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001:/mount-9p --alsologtostderr -v=1              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh       │ functional-232602 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh       │ functional-232602 ssh -- ls -la /mount-9p                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh       │ functional-232602 ssh cat /mount-9p/test-1766018977267713907                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh       │ functional-232602 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh       │ functional-232602 ssh sudo umount -f /mount-9p                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ mount     │ -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2038120171/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh       │ functional-232602 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh       │ functional-232602 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh       │ functional-232602 ssh -- ls -la /mount-9p                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh       │ functional-232602 ssh sudo umount -f /mount-9p                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ mount     │ -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount1 --alsologtostderr -v=1                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh       │ functional-232602 ssh findmnt -T /mount1                                                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ mount     │ -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount2 --alsologtostderr -v=1                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ mount     │ -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount3 --alsologtostderr -v=1                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh       │ functional-232602 ssh findmnt -T /mount2                                                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh       │ functional-232602 ssh findmnt -T /mount3                                                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ mount     │ -p functional-232602 --kill=true                                                                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ start     │ -p functional-232602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1   │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ start     │ -p functional-232602 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1             │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ start     │ -p functional-232602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1   │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-232602 --alsologtostderr -v=1                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:49:43
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:49:43.724650 1329993 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:49:43.724800 1329993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:49:43.724829 1329993 out.go:374] Setting ErrFile to fd 2...
	I1218 00:49:43.724835 1329993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:49:43.725246 1329993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:49:43.725655 1329993 out.go:368] Setting JSON to false
	I1218 00:49:43.726537 1329993 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27130,"bootTime":1765991854,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:49:43.726603 1329993 start.go:143] virtualization:  
	I1218 00:49:43.729825 1329993 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:49:43.732853 1329993 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:49:43.732977 1329993 notify.go:221] Checking for updates...
	I1218 00:49:43.738587 1329993 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:49:43.741453 1329993 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:49:43.744301 1329993 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:49:43.747141 1329993 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:49:43.749958 1329993 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:49:43.753490 1329993 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:49:43.754156 1329993 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:49:43.785304 1329993 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:49:43.785430 1329993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:49:43.841100 1329993 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:49:43.829277142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:49:43.841205 1329993 docker.go:319] overlay module found
	I1218 00:49:43.844333 1329993 out.go:179] * Using the docker driver based on existing profile
	I1218 00:49:43.847146 1329993 start.go:309] selected driver: docker
	I1218 00:49:43.847177 1329993 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:49:43.847299 1329993 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:49:43.851013 1329993 out.go:203] 
	W1218 00:49:43.853978 1329993 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1218 00:49:43.856960 1329993 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:47:50 functional-232602 containerd[9652]: time="2025-12-18T00:47:50.480115132Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.479679470Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.482375935Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.484746123Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.493400844Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.832040692Z" level=info msg="No images store for sha256:88871186651aea0ed5608315e891b3426e70e8f85f75cbd135c4079fb9a8af37"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.834441140Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.842565463Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.843007052Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.134966568Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.137526298Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.142612413Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.150391104Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.447523093Z" level=info msg="No images store for sha256:88871186651aea0ed5608315e891b3426e70e8f85f75cbd135c4079fb9a8af37"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.449756341Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.461849843Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.462352304Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.465606883Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.468013616Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.471019652Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.479506099Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.295881886Z" level=info msg="No images store for sha256:fbee3dfdb946545a8487e59f5adaf8b308b880e0a9660068998d6d7ea3033fed"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.298353921Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.307420645Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.307912686Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:49:45.352989   23721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:45.353800   23721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:45.355600   23721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:45.356337   23721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:45.357999   23721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:49:45 up  7:32,  0 user,  load average: 0.57, 0.40, 0.47
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:49:41 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:42 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 18 00:49:42 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:42 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:42 functional-232602 kubelet[23558]: E1218 00:49:42.662049   23558 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:42 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:42 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:43 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 18 00:49:43 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:43 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:43 functional-232602 kubelet[23600]: E1218 00:49:43.454742   23600 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:43 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:43 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:44 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 18 00:49:44 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:44 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:44 functional-232602 kubelet[23615]: E1218 00:49:44.200070   23615 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:44 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:44 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:44 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 18 00:49:44 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:44 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:44 functional-232602 kubelet[23656]: E1218 00:49:44.963948   23656 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:44 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:44 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
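The kubelet crash-loop in the log above names one root cause: the v1.35.0-rc.1 kubelet refuses to start on a host still using cgroup v1. A quick way to confirm which cgroup hierarchy the host is on (a diagnostic sketch, independent of the recorded run):

	# "cgroup2fs" means cgroup v2; "tmpfs" indicates the legacy cgroup v1 layout the error complains about
	stat -fc %T /sys/fs/cgroup/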
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (334.253741ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (1.92s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 status: exit status 2 (341.871999ms)

-- stdout --
	functional-232602
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-232602 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (307.692201ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-232602 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 status -o json: exit status 2 (320.669173ms)

-- stdout --
	{"Name":"functional-232602","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-232602 status -o json" : exit status 2
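The JSON output carries the same fields as the Go templates used above, so scripted checks can filter it directly; for example (illustrative only, assuming jq is available on the host):

	# prints "Stopped" for the status captured above; the status command itself still exits 2
	out/minikube-linux-arm64 -p functional-232602 status -o json | jq -r .APIServer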
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (316.763844ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-232602 logs -n 25: (1.010241657s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-232602 ssh sudo cat /etc/ssl/certs/1261148.pem                                                                                                       │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /usr/share/ca-certificates/1261148.pem                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image ls                                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image save kicbase/echo-server:functional-232602 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /etc/ssl/certs/12611482.pem                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /usr/share/ca-certificates/12611482.pem                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image ls                                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /etc/test/nested/copy/1261148/hosts                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image ls                                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ service │ functional-232602 service list                                                                                                                                  │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ image   │ functional-232602 image save --daemon kicbase/echo-server:functional-232602 --alsologtostderr                                                                   │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ service │ functional-232602 service list -o json                                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ ssh     │ functional-232602 ssh echo hello                                                                                                                                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ service │ functional-232602 service --namespace=default --https --url hello-node                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ ssh     │ functional-232602 ssh cat /etc/hostname                                                                                                                         │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ service │ functional-232602 service hello-node --url --format={{.IP}}                                                                                                     │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ tunnel  │ functional-232602 tunnel --alsologtostderr                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ tunnel  │ functional-232602 tunnel --alsologtostderr                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ service │ functional-232602 service hello-node --url                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ tunnel  │ functional-232602 tunnel --alsologtostderr                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ addons  │ functional-232602 addons list                                                                                                                                   │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ addons  │ functional-232602 addons list -o json                                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:35:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:35:27.044902 1311248 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:35:27.045002 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045006 1311248 out.go:374] Setting ErrFile to fd 2...
	I1218 00:35:27.045010 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045249 1311248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:35:27.045606 1311248 out.go:368] Setting JSON to false
	I1218 00:35:27.046406 1311248 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26273,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:35:27.046458 1311248 start.go:143] virtualization:  
	I1218 00:35:27.049930 1311248 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:35:27.052925 1311248 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:35:27.053012 1311248 notify.go:221] Checking for updates...
	I1218 00:35:27.058856 1311248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:35:27.061872 1311248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:35:27.064792 1311248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:35:27.067743 1311248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:35:27.070676 1311248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:35:27.074096 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:27.074190 1311248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:35:27.106641 1311248 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:35:27.106748 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.164302 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.154715728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.164392 1311248 docker.go:319] overlay module found
	I1218 00:35:27.167427 1311248 out.go:179] * Using the docker driver based on existing profile
	I1218 00:35:27.170281 1311248 start.go:309] selected driver: docker
	I1218 00:35:27.170292 1311248 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.170444 1311248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:35:27.170546 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.230048 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.221277832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.230469 1311248 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 00:35:27.230491 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:27.230542 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
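The two cni.go lines above show the CNI choice being derived from the driver/runtime pair alone: "docker" plus "containerd" resolves to kindnet. A minimal Go sketch of that decision, assuming a hypothetical recommendCNI helper (minikube's actual logic in cni.go handles many more combinations):

package main

import "fmt"

// recommendCNI mirrors only the case visible in this log: the "docker"
// driver with the "containerd" runtime yields kindnet. The empty-string
// fallback for other pairs is an assumption for illustration.
func recommendCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "containerd" {
		return "kindnet"
	}
	return ""
}

func main() {
	fmt.Println(recommendCNI("docker", "containerd")) // prints: kindnet
}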
	I1218 00:35:27.230580 1311248 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.235511 1311248 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:35:27.238271 1311248 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:35:27.241192 1311248 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:35:27.243943 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:27.243991 1311248 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:35:27.243999 1311248 cache.go:65] Caching tarball of preloaded images
	I1218 00:35:27.244040 1311248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:35:27.244087 1311248 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:35:27.244096 1311248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:35:27.244211 1311248 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:35:27.263574 1311248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:35:27.263584 1311248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:35:27.263598 1311248 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:35:27.263628 1311248 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:35:27.263679 1311248 start.go:364] duration metric: took 35.445µs to acquireMachinesLock for "functional-232602"
	I1218 00:35:27.263697 1311248 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:35:27.263701 1311248 fix.go:54] fixHost starting: 
	I1218 00:35:27.263946 1311248 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:35:27.280222 1311248 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:35:27.280243 1311248 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:35:27.283327 1311248 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:35:27.283352 1311248 machine.go:94] provisionDockerMachine start ...
	I1218 00:35:27.283428 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.299920 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.300231 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.300238 1311248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:35:27.452356 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.452370 1311248 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:35:27.452432 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.473471 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.473816 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.473825 1311248 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:35:27.640067 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.640142 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.667013 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.667323 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.667342 1311248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:35:27.820945 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
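The SSH script above is idempotent: it rewrites /etc/hosts only when no line already ends in the hostname, preferring to replace an existing 127.0.1.1 entry over appending a new one. A pure-Go sketch of the same transformation; the ensureHostname name is illustrative, not minikube code (which runs the shell version remotely over SSH):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname reproduces the shell logic: leave the file alone if any
// line already ends in the hostname, rewrite an existing 127.0.1.1 entry
// if present, otherwise append one.
func ensureHostname(hosts, name string) string {
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
	if present.MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "functional-232602"))
}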
	I1218 00:35:27.820961 1311248 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:35:27.820980 1311248 ubuntu.go:190] setting up certificates
	I1218 00:35:27.820989 1311248 provision.go:84] configureAuth start
	I1218 00:35:27.821051 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:27.838852 1311248 provision.go:143] copyHostCerts
	I1218 00:35:27.838916 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:35:27.838924 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:35:27.838994 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:35:27.839097 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:35:27.839100 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:35:27.839128 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:35:27.839186 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:35:27.839190 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:35:27.839213 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:35:27.839265 1311248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:35:28.109890 1311248 provision.go:177] copyRemoteCerts
	I1218 00:35:28.109947 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:35:28.109996 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.127232 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.232344 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:35:28.250086 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:35:28.268448 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:35:28.286339 1311248 provision.go:87] duration metric: took 465.326862ms to configureAuth
	I1218 00:35:28.286357 1311248 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:35:28.286550 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:28.286556 1311248 machine.go:97] duration metric: took 1.003199883s to provisionDockerMachine
	I1218 00:35:28.286562 1311248 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:35:28.286572 1311248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:35:28.286620 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:35:28.286663 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.304025 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.412869 1311248 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:35:28.416834 1311248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:35:28.416854 1311248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:35:28.416865 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:35:28.416921 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:35:28.417025 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:35:28.417099 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:35:28.417168 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:35:28.424798 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:28.442733 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:35:28.462911 1311248 start.go:296] duration metric: took 176.334186ms for postStartSetup
	I1218 00:35:28.462983 1311248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:35:28.463039 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.480489 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.585769 1311248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:35:28.590837 1311248 fix.go:56] duration metric: took 1.327128154s for fixHost
	I1218 00:35:28.590854 1311248 start.go:83] releasing machines lock for "functional-232602", held for 1.327167711s
	I1218 00:35:28.590944 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:28.607738 1311248 ssh_runner.go:195] Run: cat /version.json
	I1218 00:35:28.607789 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.608049 1311248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:35:28.608095 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.626689 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.634380 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.732432 1311248 ssh_runner.go:195] Run: systemctl --version
	I1218 00:35:28.823477 1311248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 00:35:28.828399 1311248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:35:28.828467 1311248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:35:28.836277 1311248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:35:28.836291 1311248 start.go:496] detecting cgroup driver to use...
	I1218 00:35:28.836322 1311248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:35:28.836377 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:35:28.852038 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:35:28.865568 1311248 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:35:28.865634 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:35:28.881324 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:35:28.894482 1311248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:35:29.019814 1311248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:35:29.139455 1311248 docker.go:234] disabling docker service ...
	I1218 00:35:29.139511 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:35:29.157302 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:35:29.172520 1311248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:35:29.290798 1311248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:35:29.409846 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:35:29.423039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:35:29.438313 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:35:29.447458 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:35:29.457161 1311248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:35:29.457221 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:35:29.466703 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.475761 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:35:29.484925 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.493811 1311248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:35:29.502125 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:35:29.511205 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:35:29.520548 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:35:29.530343 1311248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:35:29.538157 1311248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
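Pod networking needs IPv4 forwarding, and the step above enables it by writing 1 into procfs rather than calling sysctl(8). A minimal Go equivalent of that one-liner (requires root, like the sudo shell version):

package main

import "os"

func main() {
	// Writing "1" here is equivalent to `sysctl -w net.ipv4.ip_forward=1`
	// and to the `echo 1 > /proc/sys/net/ipv4/ip_forward` step in the log.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		panic(err)
	}
}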
	I1218 00:35:29.545765 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:29.664409 1311248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 00:35:29.789454 1311248 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:35:29.789537 1311248 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:35:29.793414 1311248 start.go:564] Will wait 60s for crictl version
	I1218 00:35:29.793467 1311248 ssh_runner.go:195] Run: which crictl
	I1218 00:35:29.796922 1311248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:35:29.821478 1311248 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:35:29.821534 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.845973 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.874969 1311248 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:35:29.877886 1311248 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:35:29.897397 1311248 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:35:29.909164 1311248 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1218 00:35:29.912023 1311248 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:35:29.912156 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:29.912246 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.959601 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.959615 1311248 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:35:29.959670 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.987018 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.987029 1311248 cache_images.go:86] Images are preloaded, skipping loading
	I1218 00:35:29.987035 1311248 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:35:29.987151 1311248 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 00:35:29.987219 1311248 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:35:30.033188 1311248 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1218 00:35:30.033262 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:30.033272 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:30.033285 1311248 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:35:30.033322 1311248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:35:30.033459 1311248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
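The kubeadm config printed above is generated from the cluster config, with the user's ExtraOptions (here enable-admission-plugins=NamespaceAutoProvision) substituted into the apiServer extraArgs. A heavily trimmed Go sketch of such a rendering step using text/template; the template body and field names are illustrative assumptions, not minikube's real kubeadm template:

package main

import (
	"os"
	"text/template"
)

// clusterCfg is a stand-in template carrying only fields visible in the
// log above; minikube's actual template covers the full config.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:
    - name: "enable-admission-plugins"
      value: "{{.AdmissionPlugins}}"
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	err := t.Execute(os.Stdout, map[string]string{
		"AdmissionPlugins": "NamespaceAutoProvision",
		"Endpoint":         "control-plane.minikube.internal:8441",
		"Version":          "v1.35.0-rc.1",
	})
	if err != nil {
		panic(err)
	}
}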
	I1218 00:35:30.033555 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:35:30.044133 1311248 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:35:30.044224 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:35:30.053566 1311248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:35:30.069600 1311248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:35:30.086185 1311248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1218 00:35:30.100953 1311248 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:35:30.105204 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:30.229133 1311248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:35:30.643842 1311248 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:35:30.643853 1311248 certs.go:195] generating shared ca certs ...
	I1218 00:35:30.643868 1311248 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:35:30.644040 1311248 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:35:30.644079 1311248 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:35:30.644085 1311248 certs.go:257] generating profile certs ...
	I1218 00:35:30.644187 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:35:30.644248 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:35:30.644287 1311248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:35:30.644391 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:35:30.644420 1311248 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:35:30.644426 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:35:30.644455 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:35:30.644481 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:35:30.644512 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:35:30.644557 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:30.645271 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:35:30.667963 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:35:30.688789 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:35:30.707638 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:35:30.727172 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:35:30.745582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:35:30.763537 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:35:30.781521 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:35:30.799255 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:35:30.816582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:35:30.835230 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:35:30.852513 1311248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:35:30.865555 1311248 ssh_runner.go:195] Run: openssl version
	I1218 00:35:30.871911 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.879397 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:35:30.886681 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890109 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890169 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.930894 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:35:30.938142 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.945286 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:35:30.952538 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956151 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956245 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.997157 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:35:31.005056 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.014006 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:35:31.022034 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025894 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025961 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.067200 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 00:35:31.075278 1311248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:35:31.079306 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:35:31.123391 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:35:31.165879 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:35:31.208281 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:35:31.249146 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:35:31.290212 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 00:35:31.331444 1311248 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:31.331522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:35:31.331580 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.356945 1311248 cri.go:89] found id: ""
	I1218 00:35:31.357003 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:35:31.364788 1311248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:35:31.364798 1311248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:35:31.364876 1311248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:35:31.372428 1311248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.372951 1311248 kubeconfig.go:125] found "functional-232602" server: "https://192.168.49.2:8441"
	I1218 00:35:31.374199 1311248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:35:31.382218 1311248 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-18 00:20:57.479200490 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-18 00:35:30.095938034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
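The drift check above rides on diff's exit status: 0 means the stored kubeadm.yaml matches the freshly generated .new file, 1 means they differ and the cluster must be reconfigured. A self-contained Go sketch of that check; configDrifted is a hypothetical name, and minikube's version runs the diff over SSH:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// configDrifted runs `diff -u old new` and interprets the exit status the
// way GNU diff defines it: 0 = identical, 1 = different, >1 = trouble.
func configDrifted(oldPath, newPath string) (bool, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, nil // identical: the restart can reuse the old config
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		fmt.Printf("-- stdout --\n%s-- /stdout --\n", out)
		return true, nil // drift detected: reconfigure from the .new file
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("drifted:", drifted)
}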
	I1218 00:35:31.382230 1311248 kubeadm.go:1161] stopping kube-system containers ...
	I1218 00:35:31.382240 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1218 00:35:31.382293 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.418635 1311248 cri.go:89] found id: ""
	I1218 00:35:31.418695 1311248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1218 00:35:31.437319 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:35:31.447695 1311248 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 18 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 18 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 18 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 18 00:25 /etc/kubernetes/scheduler.conf
	
	I1218 00:35:31.447757 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:35:31.455511 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:35:31.463139 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.463194 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:35:31.470550 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.478132 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.478200 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.485959 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:35:31.493702 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.493757 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 00:35:31.501195 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:35:31.509596 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:31.563212 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:32.882945 1311248 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.319707666s)
	I1218 00:35:32.883005 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.109967 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.178681 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.229970 1311248 api_server.go:52] waiting for apiserver process to appear ...
	I1218 00:35:33.230040 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:33.730927 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.230378 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.730284 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.230343 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.730919 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.730993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.230539 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.731124 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.230838 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.730863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.230678 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.730230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.230236 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... same probe repeated at ~500ms intervals, 106 attempts between 00:35:40 and 00:36:32, none matching a running kube-apiserver ...]
	I1218 00:36:32.730134 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
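The 106 probes above are the apiserver health wait: every ~500ms the runner executes pgrep over SSH inside the node, looking for a kube-apiserver process, and only falls through to the diagnostic dump below once no probe has matched. A minimal sketch of a poll loop like the one driving these lines, assuming a plain local exec.Command in place of minikube's SSH runner (the waitForAPIServerProcess name is hypothetical, not minikube's):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(ctx context.Context) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-tick.C:
			// -x exact match, -n newest, -f match against the full command line.
			cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
			if err := cmd.Run(); err == nil {
				return nil // pgrep exits 0 only when a process matched
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForAPIServerProcess(ctx); err != nil {
		fmt.Println(err)
	}
}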
	I1218 00:36:33.230238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:33.230314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:33.258458 1311248 cri.go:89] found id: ""
	I1218 00:36:33.258472 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.258484 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:33.258490 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:33.258562 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:33.283965 1311248 cri.go:89] found id: ""
	I1218 00:36:33.283979 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.283986 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:33.283991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:33.284048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:33.308663 1311248 cri.go:89] found id: ""
	I1218 00:36:33.308678 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.308693 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:33.308699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:33.308760 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:33.337762 1311248 cri.go:89] found id: ""
	I1218 00:36:33.337775 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.337783 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:33.337788 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:33.337852 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:33.366489 1311248 cri.go:89] found id: ""
	I1218 00:36:33.366503 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.366510 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:33.366515 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:33.366574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:33.401983 1311248 cri.go:89] found id: ""
	I1218 00:36:33.401998 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.402005 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:33.402010 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:33.402067 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:33.436853 1311248 cri.go:89] found id: ""
	I1218 00:36:33.436867 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.436874 1311248 logs.go:284] No container was found matching "kindnet"
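With no apiserver process found, the runner takes a census of control-plane containers: one `crictl ps -a --quiet --name=<component>` per component, where empty output (`found id: ""` / `0 containers`) means the component never got a container at all, as opposed to one that started and then crashed. A rough equivalent, again assuming local exec rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		// --quiet prints only container IDs, one per line; errors are
		// ignored here for brevity, so a missing crictl also reads as empty.
		out, _ := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+c).Output()
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}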
	I1218 00:36:33.436883 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:33.436893 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:33.504087 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
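The describe-nodes failure is consistent with the empty container census: kubectl inside the node dials https://localhost:8441 (the --apiserver-port this profile was started with), localhost resolves to the IPv6 loopback [::1], and the dial is refused because nothing is listening on the port, not because of a TLS or auth problem further up the stack. A bare TCP probe is enough to make that distinction; a sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused dial here reproduces the kubectl error above:
	// "dial tcp [::1]:8441: connect: connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}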
	I1218 00:36:33.504097 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:33.504107 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:33.570523 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:33.570549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:33.607484 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:33.607500 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:33.664867 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:33.664884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
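Each diagnostic pass fans out over the same fixed set of shell pipelines, each capped at 400 lines: the kubelet and containerd units via journalctl, warnings-and-up from dmesg, and a crictl ps with a docker ps fallback. A sketch of that fan-out with the command strings copied verbatim from the log and the output handling stubbed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		// Same invocation shape the runner uses: /bin/bash -c "<pipeline>".
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v, %d bytes) ===\n", s.name, err, len(out))
	}
}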
	I1218 00:36:36.181388 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:36.191464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:36.191521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:36.214848 1311248 cri.go:89] found id: ""
	I1218 00:36:36.214863 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.214870 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:36.214876 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:36.214933 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:36.241311 1311248 cri.go:89] found id: ""
	I1218 00:36:36.241324 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.241331 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:36.241336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:36.241394 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:36.265257 1311248 cri.go:89] found id: ""
	I1218 00:36:36.265271 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.265279 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:36.265284 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:36.265343 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:36.288492 1311248 cri.go:89] found id: ""
	I1218 00:36:36.288506 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.288513 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:36.288518 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:36.288574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:36.316558 1311248 cri.go:89] found id: ""
	I1218 00:36:36.316573 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.316580 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:36.316585 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:36.316664 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:36.341952 1311248 cri.go:89] found id: ""
	I1218 00:36:36.341966 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.341973 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:36.341979 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:36.342037 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:36.365945 1311248 cri.go:89] found id: ""
	I1218 00:36:36.365959 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.365966 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:36.365974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:36.365983 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:36.426123 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:36.426142 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:36.444123 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:36.444140 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:36.509193 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:36.509204 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:36.509214 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:36.571649 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:36.571667 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.103696 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:39.113703 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:39.113762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:39.141856 1311248 cri.go:89] found id: ""
	I1218 00:36:39.141870 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.141878 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:39.141883 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:39.141944 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:39.170038 1311248 cri.go:89] found id: ""
	I1218 00:36:39.170052 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.170101 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:39.170107 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:39.170172 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:39.199014 1311248 cri.go:89] found id: ""
	I1218 00:36:39.199028 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.199035 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:39.199041 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:39.199101 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:39.226392 1311248 cri.go:89] found id: ""
	I1218 00:36:39.226414 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.226422 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:39.226427 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:39.226493 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:39.251905 1311248 cri.go:89] found id: ""
	I1218 00:36:39.251920 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.251927 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:39.251932 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:39.251992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:39.276915 1311248 cri.go:89] found id: ""
	I1218 00:36:39.276937 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.276944 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:39.276949 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:39.277007 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:39.301520 1311248 cri.go:89] found id: ""
	I1218 00:36:39.301534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.301542 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:39.301551 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:39.301560 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:39.364240 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:39.364259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.394082 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:39.394098 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:39.460886 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:39.460907 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:39.477258 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:39.477273 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:39.547172 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:42.048213 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:42.059442 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:42.059521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:42.095887 1311248 cri.go:89] found id: ""
	I1218 00:36:42.095903 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.095911 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:42.095917 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:42.095987 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:42.126738 1311248 cri.go:89] found id: ""
	I1218 00:36:42.126756 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.126763 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:42.126769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:42.126846 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:42.183895 1311248 cri.go:89] found id: ""
	I1218 00:36:42.183916 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.183924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:42.183931 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:42.184005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:42.217296 1311248 cri.go:89] found id: ""
	I1218 00:36:42.217313 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.217320 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:42.217333 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:42.217410 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:42.248021 1311248 cri.go:89] found id: ""
	I1218 00:36:42.248038 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.248065 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:42.248071 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:42.248143 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:42.278624 1311248 cri.go:89] found id: ""
	I1218 00:36:42.278650 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.278658 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:42.278664 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:42.278732 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:42.306575 1311248 cri.go:89] found id: ""
	I1218 00:36:42.306589 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.306604 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:42.306613 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:42.306622 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:42.366835 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:42.366859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:42.381793 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:42.381810 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:42.478588 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:42.478598 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:42.478608 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:42.541093 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:42.541114 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:45.069751 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:45.106091 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:45.106161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:45.152078 1311248 cri.go:89] found id: ""
	I1218 00:36:45.152105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.152113 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:45.152120 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:45.152202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:45.228849 1311248 cri.go:89] found id: ""
	I1218 00:36:45.228866 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.228874 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:45.228881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:45.229017 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:45.284605 1311248 cri.go:89] found id: ""
	I1218 00:36:45.284640 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.284648 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:45.284654 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:45.284773 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:45.318439 1311248 cri.go:89] found id: ""
	I1218 00:36:45.318454 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.318461 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:45.318467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:45.318532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:45.348962 1311248 cri.go:89] found id: ""
	I1218 00:36:45.348976 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.348984 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:45.348990 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:45.349055 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:45.378098 1311248 cri.go:89] found id: ""
	I1218 00:36:45.378112 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.378119 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:45.378125 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:45.378227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:45.435291 1311248 cri.go:89] found id: ""
	I1218 00:36:45.435311 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.435318 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:45.435335 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:45.435362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:45.505552 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:45.505571 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:45.523778 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:45.523794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:45.592584 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:45.584713   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.585204   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.586780   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.587182   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.588708   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:45.584713   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.585204   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.586780   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.587182   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.588708   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:45.592594 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:45.592606 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:45.658999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:45.659018 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:48.186749 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:48.197169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:48.197230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:48.222369 1311248 cri.go:89] found id: ""
	I1218 00:36:48.222383 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.222390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:48.222396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:48.222459 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:48.247132 1311248 cri.go:89] found id: ""
	I1218 00:36:48.247146 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.247153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:48.247158 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:48.247217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:48.272441 1311248 cri.go:89] found id: ""
	I1218 00:36:48.272455 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.272462 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:48.272467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:48.272526 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:48.302640 1311248 cri.go:89] found id: ""
	I1218 00:36:48.302655 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.302662 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:48.302679 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:48.302737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:48.329411 1311248 cri.go:89] found id: ""
	I1218 00:36:48.329425 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.329433 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:48.329438 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:48.329497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:48.358419 1311248 cri.go:89] found id: ""
	I1218 00:36:48.358433 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.358440 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:48.358445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:48.358503 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:48.383182 1311248 cri.go:89] found id: ""
	I1218 00:36:48.383195 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.383203 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:48.383210 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:48.383220 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:48.451796 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:48.451815 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:48.467080 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:48.467096 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:48.533083 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:48.524386   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.525232   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.526917   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.527532   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.529214   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:48.524386   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.525232   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.526917   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.527532   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.529214   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:48.533092 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:48.533103 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:48.596920 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:48.596940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:51.124756 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:51.135594 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:51.135659 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:51.164133 1311248 cri.go:89] found id: ""
	I1218 00:36:51.164148 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.164156 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:51.164161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:51.164226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:51.190200 1311248 cri.go:89] found id: ""
	I1218 00:36:51.190215 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.190222 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:51.190228 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:51.190291 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:51.216170 1311248 cri.go:89] found id: ""
	I1218 00:36:51.216187 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.216194 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:51.216200 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:51.216263 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:51.246031 1311248 cri.go:89] found id: ""
	I1218 00:36:51.246045 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.246052 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:51.246058 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:51.246122 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:51.278864 1311248 cri.go:89] found id: ""
	I1218 00:36:51.278878 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.278885 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:51.278890 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:51.278963 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:51.303118 1311248 cri.go:89] found id: ""
	I1218 00:36:51.303132 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.303139 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:51.303144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:51.303202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:51.328091 1311248 cri.go:89] found id: ""
	I1218 00:36:51.328105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.328112 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:51.328120 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:51.328130 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:51.385226 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:51.385249 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:51.400951 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:51.400967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:51.479293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:51.470350   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.470962   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.472611   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.473295   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.474905   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:51.470350   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.470962   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.472611   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.473295   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.474905   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:51.479304 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:51.479315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:51.541268 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:51.541288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:54.069293 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:54.080067 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:54.080153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:54.106375 1311248 cri.go:89] found id: ""
	I1218 00:36:54.106390 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.106402 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:54.106408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:54.106467 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:54.131767 1311248 cri.go:89] found id: ""
	I1218 00:36:54.131781 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.131788 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:54.131793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:54.131850 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:54.157519 1311248 cri.go:89] found id: ""
	I1218 00:36:54.157534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.157541 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:54.157546 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:54.157606 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:54.182381 1311248 cri.go:89] found id: ""
	I1218 00:36:54.182396 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.182403 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:54.182408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:54.182478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:54.211219 1311248 cri.go:89] found id: ""
	I1218 00:36:54.211234 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.211241 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:54.211247 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:54.211323 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:54.243605 1311248 cri.go:89] found id: ""
	I1218 00:36:54.243627 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.243634 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:54.243640 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:54.243710 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:54.268614 1311248 cri.go:89] found id: ""
	I1218 00:36:54.268648 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.268655 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:54.268664 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:54.268675 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:54.332655 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:54.324645   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.325064   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326586   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326911   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.328406   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:54.332668 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:54.332679 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:54.396896 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:54.396916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:54.440350 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:54.440371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:54.503158 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:54.503178 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
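The timestamps show the harness re-running the same probe roughly every three seconds (00:36:51, :54, :57, ...), each cycle starting with a pgrep for the apiserver process. A sketch of such a wait loop, assuming the roughly 3-second cadence and a 6-minute budget for illustration; minikube's actual retry helpers differ:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process matching the
// minikube profile exists; pgrep exits non-zero when nothing matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed budget, for illustration
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}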
	I1218 00:36:57.019672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:57.030198 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:57.030268 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:57.059845 1311248 cri.go:89] found id: ""
	I1218 00:36:57.059859 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.059866 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:57.059872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:57.059939 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:57.086203 1311248 cri.go:89] found id: ""
	I1218 00:36:57.086217 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.086224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:57.086229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:57.086326 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:57.115321 1311248 cri.go:89] found id: ""
	I1218 00:36:57.115335 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.115342 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:57.115347 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:57.115416 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:57.141717 1311248 cri.go:89] found id: ""
	I1218 00:36:57.141731 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.141738 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:57.141743 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:57.141801 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:57.166376 1311248 cri.go:89] found id: ""
	I1218 00:36:57.166389 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.166396 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:57.166400 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:57.166470 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:57.194461 1311248 cri.go:89] found id: ""
	I1218 00:36:57.194475 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.194494 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:57.194500 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:57.194557 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:57.219267 1311248 cri.go:89] found id: ""
	I1218 00:36:57.219280 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.219287 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:57.219295 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:57.219305 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:57.274913 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:57.274932 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:57.290015 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:57.290032 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:57.353493 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:57.344799   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.345456   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347207   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347788   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.349492   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:36:57.353504 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:57.353514 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:57.424372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:57.424400 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:59.955778 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:59.965801 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:59.965861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:59.993708 1311248 cri.go:89] found id: ""
	I1218 00:36:59.993722 1311248 logs.go:282] 0 containers: []
	W1218 00:36:59.993729 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:59.993734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:59.993792 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:00.055250 1311248 cri.go:89] found id: ""
	I1218 00:37:00.055266 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.055274 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:00.055280 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:00.055388 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:00.117792 1311248 cri.go:89] found id: ""
	I1218 00:37:00.117810 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.117818 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:00.117824 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:00.117903 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:00.170362 1311248 cri.go:89] found id: ""
	I1218 00:37:00.170378 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.170394 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:00.170401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:00.170482 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:00.229984 1311248 cri.go:89] found id: ""
	I1218 00:37:00.230002 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.230010 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:00.230015 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:00.230094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:00.264809 1311248 cri.go:89] found id: ""
	I1218 00:37:00.264826 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.264833 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:00.264839 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:00.264908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:00.313700 1311248 cri.go:89] found id: ""
	I1218 00:37:00.313718 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.313725 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:00.313734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:00.313747 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:00.390802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:00.390825 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:00.428189 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:00.428207 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:00.494729 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:00.494750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:00.511226 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:00.511245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:00.579855 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:00.571615   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.572526   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574023   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574461   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.575927   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
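Each cycle then walks the control-plane components one by one with the same CRI query; every query returns an empty ID list (found id: ""), meaning the containers were never created. A standalone sketch of that per-component check (minikube issues these over SSH inside the node; this runs them directly for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, name := range components {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		ids := strings.TrimSpace(string(out))
		if err != nil || ids == "" {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, ids)
	}
}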
	I1218 00:37:03.080114 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:03.090701 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:03.090768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:03.123581 1311248 cri.go:89] found id: ""
	I1218 00:37:03.123596 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.123603 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:03.123608 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:03.123666 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:03.148602 1311248 cri.go:89] found id: ""
	I1218 00:37:03.148615 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.148657 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:03.148662 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:03.148733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:03.174826 1311248 cri.go:89] found id: ""
	I1218 00:37:03.174840 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.174848 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:03.174853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:03.174927 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:03.200912 1311248 cri.go:89] found id: ""
	I1218 00:37:03.200926 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.200933 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:03.200939 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:03.200998 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:03.226151 1311248 cri.go:89] found id: ""
	I1218 00:37:03.226166 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.226173 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:03.226179 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:03.226237 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:03.253785 1311248 cri.go:89] found id: ""
	I1218 00:37:03.253799 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.253806 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:03.253812 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:03.253878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:03.279482 1311248 cri.go:89] found id: ""
	I1218 00:37:03.279495 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.279502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:03.279510 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:03.279521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:03.294545 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:03.294563 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:03.360050 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:03.351618   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.352118   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.353740   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.354377   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.356009   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:03.360059 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:03.360071 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:03.423132 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:03.423151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:03.461805 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:03.461820 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.018802 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:06.030336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:06.030406 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:06.056426 1311248 cri.go:89] found id: ""
	I1218 00:37:06.056440 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.056447 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:06.056453 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:06.056513 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:06.086319 1311248 cri.go:89] found id: ""
	I1218 00:37:06.086333 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.086341 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:06.086346 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:06.086413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:06.112062 1311248 cri.go:89] found id: ""
	I1218 00:37:06.112077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.112084 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:06.112089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:06.112157 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:06.137317 1311248 cri.go:89] found id: ""
	I1218 00:37:06.137331 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.137344 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:06.137351 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:06.137419 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:06.165090 1311248 cri.go:89] found id: ""
	I1218 00:37:06.165104 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.165111 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:06.165116 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:06.165174 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:06.190738 1311248 cri.go:89] found id: ""
	I1218 00:37:06.190753 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.190759 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:06.190765 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:06.190822 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:06.215038 1311248 cri.go:89] found id: ""
	I1218 00:37:06.215066 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.215075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:06.215083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:06.215094 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.270893 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:06.270915 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:06.285817 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:06.285834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:06.354768 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:06.346564   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.347477   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349158   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349463   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.350911   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:06.354777 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:06.354787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:06.416937 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:06.416957 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
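The "container status" gather uses a fallback chain, visible in the embedded shell command: try crictl, and if it is missing or fails, fall back to docker ps -a. The same chain in Go, as a sketch:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers the CRI client and falls back to the Docker CLI,
// like the `sudo crictl ps -a || sudo docker ps -a` command in the log.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(out)
}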
	I1218 00:37:08.951149 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:08.961238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:08.961297 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:08.985900 1311248 cri.go:89] found id: ""
	I1218 00:37:08.985916 1311248 logs.go:282] 0 containers: []
	W1218 00:37:08.985923 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:08.985928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:08.985993 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:09.016022 1311248 cri.go:89] found id: ""
	I1218 00:37:09.016036 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.016043 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:09.016048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:09.016106 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:09.040820 1311248 cri.go:89] found id: ""
	I1218 00:37:09.040841 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.040849 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:09.040853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:09.040912 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:09.065452 1311248 cri.go:89] found id: ""
	I1218 00:37:09.065466 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.065473 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:09.065478 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:09.065539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:09.095062 1311248 cri.go:89] found id: ""
	I1218 00:37:09.095077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.095083 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:09.095089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:09.095151 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:09.120274 1311248 cri.go:89] found id: ""
	I1218 00:37:09.120287 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.120294 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:09.120300 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:09.120366 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:09.144652 1311248 cri.go:89] found id: ""
	I1218 00:37:09.144667 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.144674 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:09.144683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:09.144700 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:09.159355 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:09.159371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:09.224560 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:09.215599   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.216102   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.217785   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.218180   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.219658   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:09.224571 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:09.224582 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:09.286931 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:09.286951 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:09.318873 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:09.318888 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:11.876699 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:11.887524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:11.887583 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:11.913617 1311248 cri.go:89] found id: ""
	I1218 00:37:11.913631 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.913638 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:11.913643 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:11.913701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:11.942203 1311248 cri.go:89] found id: ""
	I1218 00:37:11.942219 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.942226 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:11.942231 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:11.942292 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:11.967671 1311248 cri.go:89] found id: ""
	I1218 00:37:11.967685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.967692 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:11.967697 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:11.967766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:11.992422 1311248 cri.go:89] found id: ""
	I1218 00:37:11.992437 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.992443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:11.992448 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:11.992505 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:12.031034 1311248 cri.go:89] found id: ""
	I1218 00:37:12.031049 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.031056 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:12.031061 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:12.031119 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:12.057654 1311248 cri.go:89] found id: ""
	I1218 00:37:12.057669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.057677 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:12.057682 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:12.057764 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:12.082063 1311248 cri.go:89] found id: ""
	I1218 00:37:12.082078 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.082084 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:12.082092 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:12.082102 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:12.111103 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:12.111119 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:12.168426 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:12.168446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:12.183407 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:12.183423 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:12.251784 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:12.243223   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.243928   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.245555   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.246042   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.247684   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:12.251803 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:12.251814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
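The kubelet and containerd gathers are plain journald reads: the last 400 lines for each systemd unit. A sketch of that step:

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs mirrors the `sudo journalctl -u <unit> -n 400` commands in the log.
func unitLogs(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").Output()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(u)
		if err != nil {
			fmt.Println("journalctl failed for", u, ":", err)
			continue
		}
		fmt.Printf("--- last 400 lines for %s ---\n%s", u, logs)
	}
}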
	I1218 00:37:14.823080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:14.834459 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:14.834525 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:14.860258 1311248 cri.go:89] found id: ""
	I1218 00:37:14.860272 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.860278 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:14.860283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:14.860341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:14.884703 1311248 cri.go:89] found id: ""
	I1218 00:37:14.884722 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.884729 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:14.884734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:14.884794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:14.909031 1311248 cri.go:89] found id: ""
	I1218 00:37:14.909046 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.909054 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:14.909059 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:14.909130 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:14.934504 1311248 cri.go:89] found id: ""
	I1218 00:37:14.934518 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.934525 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:14.934531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:14.934590 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:14.965623 1311248 cri.go:89] found id: ""
	I1218 00:37:14.965638 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.965646 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:14.965651 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:14.965718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:14.991607 1311248 cri.go:89] found id: ""
	I1218 00:37:14.991623 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.991631 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:14.991636 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:14.991711 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:15.027331 1311248 cri.go:89] found id: ""
	I1218 00:37:15.027347 1311248 logs.go:282] 0 containers: []
	W1218 00:37:15.027355 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:15.027364 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:15.027376 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:15.102509 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:15.094618   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.095457   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.096397   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.097224   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.098386   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:15.102519 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:15.102530 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:15.167080 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:15.167101 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:15.200488 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:15.200504 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:15.261320 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:15.261342 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
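The "describe nodes" gather invokes the version-pinned kubectl that minikube stages on the node, pointed at the cluster's kubeconfig; with the apiserver down it exits 1 and emits the connection-refused stderr repeated throughout this log. A sketch using the binary and kubeconfig paths exactly as they appear in the log lines above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("describe nodes failed:", err) // exit status 1 while 8441 is refused
	}
}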
	I1218 00:37:17.777092 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:17.788005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:17.788070 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:17.813820 1311248 cri.go:89] found id: ""
	I1218 00:37:17.813834 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.813841 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:17.813846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:17.813906 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:17.841574 1311248 cri.go:89] found id: ""
	I1218 00:37:17.841588 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.841605 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:17.841610 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:17.841679 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:17.865628 1311248 cri.go:89] found id: ""
	I1218 00:37:17.865644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.865650 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:17.865656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:17.865713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:17.891259 1311248 cri.go:89] found id: ""
	I1218 00:37:17.891273 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.891289 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:17.891295 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:17.891363 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:17.918377 1311248 cri.go:89] found id: ""
	I1218 00:37:17.918391 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.918398 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:17.918403 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:17.918461 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:17.948139 1311248 cri.go:89] found id: ""
	I1218 00:37:17.948171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.948178 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:17.948183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:17.948251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:17.971855 1311248 cri.go:89] found id: ""
	I1218 00:37:17.971869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.971876 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:17.971884 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:17.971894 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:18.026594 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:18.026614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:18.042303 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:18.042328 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:18.108683 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:18.100223   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.100970   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.102555   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.103125   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.104779   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:18.108704 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:18.108729 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:18.172657 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:18.172676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:20.704818 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:20.715060 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:20.715120 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:20.741147 1311248 cri.go:89] found id: ""
	I1218 00:37:20.741161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.741168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:20.741174 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:20.741231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:20.765846 1311248 cri.go:89] found id: ""
	I1218 00:37:20.765860 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.765867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:20.765872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:20.765930 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:20.795338 1311248 cri.go:89] found id: ""
	I1218 00:37:20.795351 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.795358 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:20.795364 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:20.795421 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:20.823054 1311248 cri.go:89] found id: ""
	I1218 00:37:20.823068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.823075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:20.823080 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:20.823137 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:20.848186 1311248 cri.go:89] found id: ""
	I1218 00:37:20.848200 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.848208 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:20.848213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:20.848278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:20.872642 1311248 cri.go:89] found id: ""
	I1218 00:37:20.872656 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.872662 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:20.872668 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:20.872771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:20.897151 1311248 cri.go:89] found id: ""
	I1218 00:37:20.897165 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.897172 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:20.897180 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:20.897190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:20.951948 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:20.951968 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:20.966927 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:20.966943 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:21.033275 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:21.024825   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.025249   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.026815   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.028251   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.029400   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:21.033286 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:21.033296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:21.096425 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:21.096445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:23.624716 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:23.635084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:23.635160 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:23.668648 1311248 cri.go:89] found id: ""
	I1218 00:37:23.668662 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.668670 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:23.668675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:23.668755 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:23.700454 1311248 cri.go:89] found id: ""
	I1218 00:37:23.700468 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.700475 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:23.700480 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:23.700538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:23.732021 1311248 cri.go:89] found id: ""
	I1218 00:37:23.732035 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.732043 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:23.732048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:23.732124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:23.760854 1311248 cri.go:89] found id: ""
	I1218 00:37:23.760868 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.760875 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:23.760881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:23.760942 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:23.786164 1311248 cri.go:89] found id: ""
	I1218 00:37:23.786178 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.786185 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:23.786189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:23.786248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:23.811196 1311248 cri.go:89] found id: ""
	I1218 00:37:23.811220 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.811229 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:23.811234 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:23.811300 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:23.835282 1311248 cri.go:89] found id: ""
	I1218 00:37:23.835297 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.835314 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:23.835323 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:23.835334 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:23.899950 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:23.891162   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.891981   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893473   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893917   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.895404   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:23.899970 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:23.899981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:23.966454 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:23.966474 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:23.994564 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:23.994580 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:24.052734 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:24.052755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:26.568298 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:26.578561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:26.578622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:26.602733 1311248 cri.go:89] found id: ""
	I1218 00:37:26.602747 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.602755 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:26.602761 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:26.602826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:26.631092 1311248 cri.go:89] found id: ""
	I1218 00:37:26.631106 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.631113 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:26.631118 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:26.631180 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:26.677513 1311248 cri.go:89] found id: ""
	I1218 00:37:26.677528 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.677536 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:26.677541 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:26.677608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:26.712071 1311248 cri.go:89] found id: ""
	I1218 00:37:26.712085 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.712093 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:26.712100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:26.712167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:26.738769 1311248 cri.go:89] found id: ""
	I1218 00:37:26.738783 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.738790 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:26.738795 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:26.738857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:26.764344 1311248 cri.go:89] found id: ""
	I1218 00:37:26.764358 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.764365 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:26.764370 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:26.764428 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:26.790276 1311248 cri.go:89] found id: ""
	I1218 00:37:26.790290 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.790297 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:26.790305 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:26.790315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:26.845607 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:26.845626 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:26.861063 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:26.861080 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:26.931574 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:26.923282   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.923912   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925418   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925858   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.927306   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:26.931584 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:26.931595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:26.998426 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:26.998445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
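	(Every kubectl attempt in this stretch fails with dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening on the configured apiserver port — the server is absent, not rejecting requests. Whether a listener ever appears can be checked directly on the node; a small sketch with standard tools not part of the captured log, assuming ss and curl are available in the node image:
	# Check for a listener on the apiserver port, then probe the healthz endpoint.
	sudo ss -tlnp | grep 8441 || echo "no listener on 8441"
	curl -k --max-time 5 https://localhost:8441/healthz || echo "apiserver not answering"
	)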
	I1218 00:37:29.540997 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:29.551044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:29.551103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:29.575146 1311248 cri.go:89] found id: ""
	I1218 00:37:29.575161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.575168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:29.575173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:29.575230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:29.599039 1311248 cri.go:89] found id: ""
	I1218 00:37:29.599052 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.599059 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:29.599064 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:29.599123 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:29.623971 1311248 cri.go:89] found id: ""
	I1218 00:37:29.623985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.623993 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:29.623998 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:29.624057 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:29.653653 1311248 cri.go:89] found id: ""
	I1218 00:37:29.653669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.653675 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:29.653681 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:29.653754 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:29.687572 1311248 cri.go:89] found id: ""
	I1218 00:37:29.687586 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.687593 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:29.687599 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:29.687670 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:29.725789 1311248 cri.go:89] found id: ""
	I1218 00:37:29.725803 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.725811 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:29.725816 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:29.725878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:29.753212 1311248 cri.go:89] found id: ""
	I1218 00:37:29.753226 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.753233 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:29.753241 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:29.753253 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:29.810976 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:29.810996 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:29.825952 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:29.825969 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:29.893717 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:29.885172   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.885837   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.887444   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.888114   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.889881   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:29.893736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:29.893748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:29.959773 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:29.959794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:32.492460 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:32.502745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:32.502807 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:32.528416 1311248 cri.go:89] found id: ""
	I1218 00:37:32.528431 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.528438 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:32.528443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:32.528501 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:32.553770 1311248 cri.go:89] found id: ""
	I1218 00:37:32.553785 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.553792 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:32.553798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:32.553861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:32.577941 1311248 cri.go:89] found id: ""
	I1218 00:37:32.577956 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.577963 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:32.577969 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:32.578028 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:32.604043 1311248 cri.go:89] found id: ""
	I1218 00:37:32.604058 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.604075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:32.604081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:32.604159 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:32.629080 1311248 cri.go:89] found id: ""
	I1218 00:37:32.629095 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.629102 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:32.629108 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:32.629167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:32.664156 1311248 cri.go:89] found id: ""
	I1218 00:37:32.664171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.664187 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:32.664193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:32.664281 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:32.692107 1311248 cri.go:89] found id: ""
	I1218 00:37:32.692141 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.692149 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:32.692158 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:32.692168 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:32.758211 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:32.758238 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:32.774028 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:32.774047 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:32.839724 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:32.829408   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.831067   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.832031   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.833732   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.834447   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:32.839734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:32.839749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:32.905609 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:32.905633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:35.434204 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:35.445035 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:35.445099 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:35.470531 1311248 cri.go:89] found id: ""
	I1218 00:37:35.470545 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.470553 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:35.470558 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:35.470621 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:35.494976 1311248 cri.go:89] found id: ""
	I1218 00:37:35.494990 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.494996 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:35.495001 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:35.495063 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:35.519629 1311248 cri.go:89] found id: ""
	I1218 00:37:35.519644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.519651 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:35.519656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:35.519714 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:35.544438 1311248 cri.go:89] found id: ""
	I1218 00:37:35.544453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.544460 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:35.544465 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:35.544523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:35.569684 1311248 cri.go:89] found id: ""
	I1218 00:37:35.569699 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.569706 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:35.569712 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:35.569771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:35.595541 1311248 cri.go:89] found id: ""
	I1218 00:37:35.595556 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.595563 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:35.595568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:35.595632 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:35.620307 1311248 cri.go:89] found id: ""
	I1218 00:37:35.620321 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.620328 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:35.620336 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:35.620346 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:35.678927 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:35.678945 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:35.697469 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:35.697488 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:35.774692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:35.766317   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.766971   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768479   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768968   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.770457   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:35.774703 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:35.774713 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:35.836772 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:35.836792 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:38.369786 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:38.380243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:38.380304 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:38.406412 1311248 cri.go:89] found id: ""
	I1218 00:37:38.406426 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.406433 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:38.406439 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:38.406497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:38.431433 1311248 cri.go:89] found id: ""
	I1218 00:37:38.431447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.431454 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:38.431460 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:38.431518 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:38.455854 1311248 cri.go:89] found id: ""
	I1218 00:37:38.455869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.455876 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:38.455881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:38.455943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:38.480414 1311248 cri.go:89] found id: ""
	I1218 00:37:38.480428 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.480435 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:38.480440 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:38.480497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:38.506521 1311248 cri.go:89] found id: ""
	I1218 00:37:38.506535 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.506551 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:38.506557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:38.506630 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:38.531738 1311248 cri.go:89] found id: ""
	I1218 00:37:38.531762 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.531769 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:38.531774 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:38.531840 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:38.557054 1311248 cri.go:89] found id: ""
	I1218 00:37:38.557068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.557075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:38.557083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:38.557092 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:38.613102 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:38.613120 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:38.627653 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:38.627670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:38.723568 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:38.715053   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.715674   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717355   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717806   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.719420   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:38.723579 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:38.723591 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:38.784988 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:38.785008 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
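	(The describe-nodes step fails identically on every pass. Running it by hand inside the node, using the exact binary and kubeconfig paths from the log, reproduces the same behavior: kubectl retries the discovery request five times — the memcache.go lines — before printing the final connection-refused message:
	# Run inside the node, e.g. via minikube ssh.
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	)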
	I1218 00:37:41.315880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:41.326378 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:41.326457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:41.351366 1311248 cri.go:89] found id: ""
	I1218 00:37:41.351381 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.351390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:41.351395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:41.351454 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:41.376110 1311248 cri.go:89] found id: ""
	I1218 00:37:41.376124 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.376131 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:41.376137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:41.376192 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:41.401062 1311248 cri.go:89] found id: ""
	I1218 00:37:41.401075 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.401082 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:41.401087 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:41.401146 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:41.425454 1311248 cri.go:89] found id: ""
	I1218 00:37:41.425469 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.425475 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:41.425481 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:41.425539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:41.454711 1311248 cri.go:89] found id: ""
	I1218 00:37:41.454724 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.454732 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:41.454737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:41.454799 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:41.479667 1311248 cri.go:89] found id: ""
	I1218 00:37:41.479681 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.479688 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:41.479694 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:41.479752 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:41.504248 1311248 cri.go:89] found id: ""
	I1218 00:37:41.504261 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.504268 1311248 logs.go:284] No container was found matching "kindnet"
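Each retry sweeps the same seven components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) through crictl, and every query here returns an empty ID list, which is why each sweep ends in a run of 0 containers: [] lines. The shape of that sweep as a Go sketch (runOnNode is a hypothetical stand-in for the ssh_runner calls in the log):

    package sketch

    // componentSweep mirrors the cri.go "listing CRI containers" lines:
    // one crictl query per control-plane/CNI component.
    func componentSweep(runOnNode func(string) (string, error)) map[string]string {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	ids := make(map[string]string)
    	for _, c := range components {
    		out, _ := runOnNode("sudo crictl ps -a --quiet --name=" + c)
    		ids[c] = out // empty output => `found id: ""` / "0 containers"
    	}
    	return ids
    }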
	I1218 00:37:41.504276 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:41.504323 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:41.559589 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:41.559609 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:41.574018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:41.574034 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:41.637175 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:41.628415   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.628972   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.630734   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.631095   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.632663   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
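Every describe-nodes attempt fails identically: kubectl resolves localhost to [::1] and the TCP connect to port 8441 is refused, meaning nothing is listening there at all, which is consistent with the empty kube-apiserver sweeps. The same symptom can be reproduced without kubectl by a bare dial; a small sketch:

    package main

    // Sketch: a plain TCP dial against the apiserver port reproduces the
    // "dial tcp [::1]:8441: connect: connection refused" seen above.
    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open")
    }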
	I1218 00:37:41.637186 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:41.637196 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:41.712099 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:41.712122 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.243063 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:44.253213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:44.253272 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:44.278124 1311248 cri.go:89] found id: ""
	I1218 00:37:44.278138 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.278145 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:44.278150 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:44.278211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:44.302729 1311248 cri.go:89] found id: ""
	I1218 00:37:44.302743 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.302750 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:44.302755 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:44.302813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:44.327369 1311248 cri.go:89] found id: ""
	I1218 00:37:44.327384 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.327391 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:44.327396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:44.327458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:44.351769 1311248 cri.go:89] found id: ""
	I1218 00:37:44.351784 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.351791 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:44.351796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:44.351858 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:44.378488 1311248 cri.go:89] found id: ""
	I1218 00:37:44.378502 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.378509 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:44.378514 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:44.378574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:44.404134 1311248 cri.go:89] found id: ""
	I1218 00:37:44.404149 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.404156 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:44.404161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:44.404219 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:44.428529 1311248 cri.go:89] found id: ""
	I1218 00:37:44.428543 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.428551 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:44.428559 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:44.428570 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:44.443196 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:44.443212 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:44.505692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:44.497164   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.497880   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.499622   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.500221   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.501801   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:44.505702 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:44.505712 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:44.571665 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:44.571686 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.600535 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:44.600553 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
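For reference, each "Gathering logs for ..." step maps to one fixed remote command; the five seen in this run, collected into a table (the variable name and layout are illustrative, not minikube internals):

    package sketch

    // Commands behind the five "Gathering logs for ..." steps in this run.
    var gatherCmds = map[string]string{
    	"kubelet":          "sudo journalctl -u kubelet -n 400",
    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    	"containerd":       "sudo journalctl -u containerd -n 400",
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }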
	I1218 00:37:47.157844 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:47.168414 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:47.168474 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:47.197971 1311248 cri.go:89] found id: ""
	I1218 00:37:47.197985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.197992 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:47.197997 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:47.198054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:47.223237 1311248 cri.go:89] found id: ""
	I1218 00:37:47.223251 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.223258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:47.223263 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:47.223322 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:47.251998 1311248 cri.go:89] found id: ""
	I1218 00:37:47.252018 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.252025 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:47.252031 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:47.252089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:47.275741 1311248 cri.go:89] found id: ""
	I1218 00:37:47.275755 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.275764 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:47.275769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:47.275826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:47.302583 1311248 cri.go:89] found id: ""
	I1218 00:37:47.302597 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.302604 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:47.302609 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:47.302665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:47.327501 1311248 cri.go:89] found id: ""
	I1218 00:37:47.327516 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.327523 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:47.327528 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:47.327594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:47.352433 1311248 cri.go:89] found id: ""
	I1218 00:37:47.352447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.352454 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:47.352463 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:47.352473 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:47.410340 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:47.410362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:47.425365 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:47.425388 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:47.492532 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:47.484205   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.484730   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486231   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486712   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.488178   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:47.492542 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:47.492562 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:47.553805 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:47.553828 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:50.086246 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:50.097136 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:50.097206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:50.124671 1311248 cri.go:89] found id: ""
	I1218 00:37:50.124685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.124693 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:50.124698 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:50.124766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:50.150439 1311248 cri.go:89] found id: ""
	I1218 00:37:50.150453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.150460 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:50.150464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:50.150523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:50.174899 1311248 cri.go:89] found id: ""
	I1218 00:37:50.174913 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.174921 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:50.174926 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:50.174992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:50.200398 1311248 cri.go:89] found id: ""
	I1218 00:37:50.200412 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.200420 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:50.200425 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:50.200486 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:50.226325 1311248 cri.go:89] found id: ""
	I1218 00:37:50.226338 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.226345 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:50.226350 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:50.226409 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:50.251194 1311248 cri.go:89] found id: ""
	I1218 00:37:50.251208 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.251215 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:50.251220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:50.251287 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:50.278029 1311248 cri.go:89] found id: ""
	I1218 00:37:50.278043 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.278050 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:50.278057 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:50.278067 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:50.338421 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:50.338443 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:50.368542 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:50.368565 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:50.423715 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:50.423734 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:50.438292 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:50.438308 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:50.499550 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:50.491066   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.491885   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493525   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493818   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.495259   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:52.999811 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:53.011389 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:53.011453 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:53.036842 1311248 cri.go:89] found id: ""
	I1218 00:37:53.036861 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.036869 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:53.036884 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:53.036981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:53.069368 1311248 cri.go:89] found id: ""
	I1218 00:37:53.069383 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.069391 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:53.069397 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:53.069458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:53.093990 1311248 cri.go:89] found id: ""
	I1218 00:37:53.094004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.094011 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:53.094016 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:53.094076 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:53.119386 1311248 cri.go:89] found id: ""
	I1218 00:37:53.119400 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.119417 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:53.119423 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:53.119487 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:53.144979 1311248 cri.go:89] found id: ""
	I1218 00:37:53.144992 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.144999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:53.145005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:53.145062 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:53.171485 1311248 cri.go:89] found id: ""
	I1218 00:37:53.171499 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.171506 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:53.171512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:53.171570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:53.198517 1311248 cri.go:89] found id: ""
	I1218 00:37:53.198530 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.198537 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:53.198545 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:53.198556 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:53.225701 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:53.225719 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:53.280281 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:53.280300 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:53.295217 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:53.295235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:53.360920 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:53.352238   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.352952   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.354786   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.355181   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.356802   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:53.360930 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:53.360940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
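The pgrep timestamps (00:37:41, :44, :47, :50, :53, :56, ...) show the apiserver check repeating on roughly a three-second cadence, with a full diagnostic gather between attempts. As a loop, the pattern looks like the sketch below; the interval is inferred from the timestamps, and both function parameters are hypothetical names:

    package sketch

    import "time"

    // waitForAPIServer captures the retry shape visible in the log: poll
    // for the kube-apiserver process, gathering diagnostics in between.
    func waitForAPIServer(check func() bool, gather func(), deadline time.Time) bool {
    	for time.Now().Before(deadline) {
    		if check() { // e.g. pgrep -xnf kube-apiserver.*minikube.*
    			return true
    		}
    		gather() // kubelet, dmesg, describe nodes, containerd, container status
    		time.Sleep(3 * time.Second) // cadence inferred, not from source
    	}
    	return false
    }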
	I1218 00:37:55.923673 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:55.935823 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:55.935880 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:55.963196 1311248 cri.go:89] found id: ""
	I1218 00:37:55.963210 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.963217 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:55.963222 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:55.963278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:55.992688 1311248 cri.go:89] found id: ""
	I1218 00:37:55.992701 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.992708 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:55.992713 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:55.992778 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:56.032683 1311248 cri.go:89] found id: ""
	I1218 00:37:56.032696 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.032705 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:56.032711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:56.032779 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:56.061554 1311248 cri.go:89] found id: ""
	I1218 00:37:56.061568 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.061575 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:56.061580 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:56.061639 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:56.090855 1311248 cri.go:89] found id: ""
	I1218 00:37:56.090869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.090877 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:56.090882 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:56.090943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:56.115990 1311248 cri.go:89] found id: ""
	I1218 00:37:56.116004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.116020 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:56.116026 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:56.116085 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:56.141361 1311248 cri.go:89] found id: ""
	I1218 00:37:56.141385 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.141393 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:56.141401 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:56.141412 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:56.202998 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:56.194857   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.195410   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197059   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197614   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.199168   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:56.203008 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:56.203019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:56.263974 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:56.263994 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:56.295494 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:56.295509 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:56.350431 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:56.350450 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
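crictl ps -a --quiet emits one container ID per line, so the empty string logged as found id: "" parses to an empty slice, which logs.go then reports as 0 containers: []. A sketch of that parse:

    package sketch

    import "strings"

    // parseIDs splits crictl --quiet output into container IDs; empty or
    // whitespace-only input yields nil, matching "0 containers: []".
    func parseIDs(out string) []string {
    	var ids []string
    	for _, line := range strings.Split(out, "\n") {
    		if s := strings.TrimSpace(line); s != "" {
    			ids = append(ids, s)
    		}
    	}
    	return ids
    }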
	I1218 00:37:58.867454 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:58.877799 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:58.877861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:58.929615 1311248 cri.go:89] found id: ""
	I1218 00:37:58.929629 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.929636 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:58.929642 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:58.929701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:58.958880 1311248 cri.go:89] found id: ""
	I1218 00:37:58.958894 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.958900 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:58.958906 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:58.958965 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:58.983460 1311248 cri.go:89] found id: ""
	I1218 00:37:58.983475 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.983482 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:58.983487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:58.983547 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:59.009476 1311248 cri.go:89] found id: ""
	I1218 00:37:59.009490 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.009497 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:59.009503 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:59.009563 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:59.033436 1311248 cri.go:89] found id: ""
	I1218 00:37:59.033450 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.033457 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:59.033462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:59.033522 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:59.058635 1311248 cri.go:89] found id: ""
	I1218 00:37:59.058649 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.058656 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:59.058661 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:59.058719 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:59.082644 1311248 cri.go:89] found id: ""
	I1218 00:37:59.082658 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.082666 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:59.082673 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:59.082684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:59.138067 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:59.138085 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:59.154868 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:59.154884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:59.232032 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:59.223238   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.223623   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225341   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225946   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.227686   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:59.232043 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:59.232061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:59.297264 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:59.297288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:01.827672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:01.838270 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:01.838330 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:01.862836 1311248 cri.go:89] found id: ""
	I1218 00:38:01.862855 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.862862 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:01.862867 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:01.862925 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:01.892782 1311248 cri.go:89] found id: ""
	I1218 00:38:01.892797 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.892804 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:01.892810 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:01.892876 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:01.919043 1311248 cri.go:89] found id: ""
	I1218 00:38:01.919068 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.919076 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:01.919081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:01.919148 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:01.945252 1311248 cri.go:89] found id: ""
	I1218 00:38:01.945267 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.945285 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:01.945291 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:01.945368 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:01.974338 1311248 cri.go:89] found id: ""
	I1218 00:38:01.974353 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.974361 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:01.974366 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:01.974433 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:02.003307 1311248 cri.go:89] found id: ""
	I1218 00:38:02.003324 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.003332 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:02.003339 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:02.003423 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:02.030938 1311248 cri.go:89] found id: ""
	I1218 00:38:02.030953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.030960 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:02.030968 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:02.030979 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:02.100511 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:02.091512   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.092078   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.093817   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.094344   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.095821   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:02.100521 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:02.100531 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:02.162112 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:02.162132 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:02.191957 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:02.191976 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:02.248095 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:02.248116 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:04.765008 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:04.775100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:04.775168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:04.799097 1311248 cri.go:89] found id: ""
	I1218 00:38:04.799125 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.799132 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:04.799137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:04.799206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:04.826968 1311248 cri.go:89] found id: ""
	I1218 00:38:04.826993 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.827000 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:04.827005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:04.827083 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:04.860005 1311248 cri.go:89] found id: ""
	I1218 00:38:04.860020 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.860027 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:04.860032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:04.860103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:04.886293 1311248 cri.go:89] found id: ""
	I1218 00:38:04.886307 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.886315 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:04.886320 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:04.886385 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:04.918579 1311248 cri.go:89] found id: ""
	I1218 00:38:04.918594 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.918601 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:04.918607 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:04.918676 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:04.945152 1311248 cri.go:89] found id: ""
	I1218 00:38:04.945167 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.945183 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:04.945189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:04.945258 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:04.976410 1311248 cri.go:89] found id: ""
	I1218 00:38:04.976424 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.976432 1311248 logs.go:284] No container was found matching "kindnet"
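Every crictl query in the block above returns an empty ID list: not one control-plane container (apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet) has been created. The seven lookups amount to this loop, a sketch to run inside the node shell (out/minikube-linux-arm64 ssh -p functional-232602):

    # Enumerate the expected control-plane containers; print <none> where empty.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        echo "$c: ${ids:-<none>}"
    done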
	I1218 00:38:04.976439 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:04.976449 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:05.032080 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:05.032100 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:05.047379 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:05.047396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:05.113965 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:05.105127   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.105769   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.107565   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.108085   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.109817   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:05.105127   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.105769   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.107565   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.108085   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.109817   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
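The describe-nodes step runs the node-local kubectl against /var/lib/minikube/kubeconfig, so its target is whatever that file's server: field says; here that is localhost:8441. To verify the endpoint directly (a sketch):

    out/minikube-linux-arm64 ssh -p functional-232602 \
        "sudo grep -n 'server:' /var/lib/minikube/kubeconfig"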
	I1218 00:38:05.113975 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:05.113986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:05.174878 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:05.174897 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:07.706926 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:07.717077 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:07.717140 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:07.741430 1311248 cri.go:89] found id: ""
	I1218 00:38:07.741464 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.741471 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:07.741477 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:07.741538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:07.766770 1311248 cri.go:89] found id: ""
	I1218 00:38:07.766784 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.766791 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:07.766796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:07.766855 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:07.790902 1311248 cri.go:89] found id: ""
	I1218 00:38:07.790917 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.790924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:07.790929 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:07.791005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:07.819681 1311248 cri.go:89] found id: ""
	I1218 00:38:07.819696 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.819703 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:07.819708 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:07.819770 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:07.844498 1311248 cri.go:89] found id: ""
	I1218 00:38:07.844512 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.844519 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:07.844524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:07.844584 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:07.870028 1311248 cri.go:89] found id: ""
	I1218 00:38:07.870043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.870050 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:07.870057 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:07.870125 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:07.906969 1311248 cri.go:89] found id: ""
	I1218 00:38:07.906984 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.906999 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:07.907007 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:07.907017 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:07.974278 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:07.974306 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:07.989533 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:07.989551 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:08.055867 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:08.047282   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.048178   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.049950   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.050299   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.051821   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:08.047282   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.048178   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.049950   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.050299   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.051821   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:08.055877 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:08.055889 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:08.118669 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:08.118693 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
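The container-status command above is a compact fallback chain: resolve crictl with which (echoing the bare name if it is not on PATH), and if the crictl invocation fails for any reason, fall back to docker ps. Unrolled into an explicit conditional for readability (a simplified sketch with the same effective behavior):

    if command -v crictl >/dev/null 2>&1; then
        # crictl exists; still fall back to docker if it exits non-zero
        sudo crictl ps -a || sudo docker ps -a
    else
        sudo docker ps -a
    fi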
	I1218 00:38:10.651292 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:10.663394 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:10.663471 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:10.687520 1311248 cri.go:89] found id: ""
	I1218 00:38:10.687534 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.687542 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:10.687547 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:10.687608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:10.713147 1311248 cri.go:89] found id: ""
	I1218 00:38:10.713161 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.713168 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:10.713173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:10.713231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:10.737926 1311248 cri.go:89] found id: ""
	I1218 00:38:10.737940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.737948 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:10.737953 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:10.738012 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:10.763422 1311248 cri.go:89] found id: ""
	I1218 00:38:10.763436 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.763443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:10.763449 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:10.763508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:10.788619 1311248 cri.go:89] found id: ""
	I1218 00:38:10.788659 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.788672 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:10.788677 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:10.788738 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:10.813718 1311248 cri.go:89] found id: ""
	I1218 00:38:10.813732 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.813740 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:10.813745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:10.813803 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:10.837575 1311248 cri.go:89] found id: ""
	I1218 00:38:10.837588 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.837595 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:10.837603 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:10.837614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:10.852133 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:10.852149 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:10.917780 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:10.909596   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.910424   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912063   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912382   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.913799   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:10.909596   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.910424   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912063   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912382   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.913799   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:10.917791 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:10.917801 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:10.987674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:10.987695 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:11.024530 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:11.024549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.581947 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:13.592491 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:13.592556 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:13.617579 1311248 cri.go:89] found id: ""
	I1218 00:38:13.617593 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.617600 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:13.617605 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:13.617665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:13.641975 1311248 cri.go:89] found id: ""
	I1218 00:38:13.641990 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.641997 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:13.642002 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:13.642060 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:13.667128 1311248 cri.go:89] found id: ""
	I1218 00:38:13.667142 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.667149 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:13.667154 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:13.667215 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:13.699564 1311248 cri.go:89] found id: ""
	I1218 00:38:13.699579 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.699586 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:13.699591 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:13.699655 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:13.727620 1311248 cri.go:89] found id: ""
	I1218 00:38:13.727634 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.727641 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:13.727646 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:13.727703 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:13.756118 1311248 cri.go:89] found id: ""
	I1218 00:38:13.756132 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.756138 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:13.756144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:13.756204 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:13.780706 1311248 cri.go:89] found id: ""
	I1218 00:38:13.780720 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.780728 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:13.780736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:13.780746 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:13.842845 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:13.842864 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:13.871826 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:13.871843 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.932300 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:13.932319 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:13.950089 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:13.950106 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:14.022114 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:14.013202   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.013842   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.015677   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.016243   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.018014   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:14.013202   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.013842   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.015677   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.016243   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.018014   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:16.522391 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
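The pgrep probe that opens each cycle combines -f (match against the full command line), -x (the pattern must match that command line exactly, end to end) and -n (report only the newest match), so it succeeds only if a live kube-apiserver process launched for this minikube profile exists. Standalone, inside the node (a sketch):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process running'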
	I1218 00:38:16.534271 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:16.534357 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:16.558729 1311248 cri.go:89] found id: ""
	I1218 00:38:16.558743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.558757 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:16.558762 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:16.558819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:16.587758 1311248 cri.go:89] found id: ""
	I1218 00:38:16.587772 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.587779 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:16.587784 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:16.587841 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:16.612793 1311248 cri.go:89] found id: ""
	I1218 00:38:16.612807 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.612814 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:16.612819 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:16.612907 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:16.637417 1311248 cri.go:89] found id: ""
	I1218 00:38:16.637431 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.637438 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:16.637443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:16.637508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:16.662059 1311248 cri.go:89] found id: ""
	I1218 00:38:16.662073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.662080 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:16.662085 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:16.662141 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:16.686710 1311248 cri.go:89] found id: ""
	I1218 00:38:16.686724 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.686731 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:16.686737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:16.686794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:16.711539 1311248 cri.go:89] found id: ""
	I1218 00:38:16.711553 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.711561 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:16.711569 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:16.711579 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:16.739136 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:16.739151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:16.794672 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:16.794694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:16.809147 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:16.809171 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:16.878702 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:16.870875   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.871602   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873068   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873374   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.874856   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:16.870875   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.871602   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873068   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873374   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.874856   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:16.878711 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:16.878723 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.444575 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:19.454827 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:19.454887 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:19.482057 1311248 cri.go:89] found id: ""
	I1218 00:38:19.482071 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.482078 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:19.482083 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:19.482142 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:19.505124 1311248 cri.go:89] found id: ""
	I1218 00:38:19.505138 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.505146 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:19.505151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:19.505209 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:19.530010 1311248 cri.go:89] found id: ""
	I1218 00:38:19.530024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.530031 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:19.530037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:19.530094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:19.555994 1311248 cri.go:89] found id: ""
	I1218 00:38:19.556008 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.556025 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:19.556030 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:19.556087 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:19.580515 1311248 cri.go:89] found id: ""
	I1218 00:38:19.580539 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.580546 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:19.580554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:19.580619 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:19.605333 1311248 cri.go:89] found id: ""
	I1218 00:38:19.605348 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.605354 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:19.605360 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:19.605418 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:19.630483 1311248 cri.go:89] found id: ""
	I1218 00:38:19.630497 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.630504 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:19.630512 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:19.630522 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:19.693128 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:19.684824   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.685371   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.686899   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.687332   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.688942   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:19.684824   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.685371   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.686899   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.687332   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.688942   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:19.693138 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:19.693148 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.755570 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:19.755590 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:19.785139 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:19.785156 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:19.842579 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:19.842605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:22.358338 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:22.368724 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:22.368793 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:22.392394 1311248 cri.go:89] found id: ""
	I1218 00:38:22.392408 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.392415 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:22.392420 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:22.392478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:22.419029 1311248 cri.go:89] found id: ""
	I1218 00:38:22.419043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.419050 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:22.419055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:22.419117 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:22.443838 1311248 cri.go:89] found id: ""
	I1218 00:38:22.443852 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.443859 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:22.443864 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:22.443923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:22.467780 1311248 cri.go:89] found id: ""
	I1218 00:38:22.467794 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.467801 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:22.467807 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:22.467864 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:22.497254 1311248 cri.go:89] found id: ""
	I1218 00:38:22.497268 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.497276 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:22.497281 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:22.497340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:22.521672 1311248 cri.go:89] found id: ""
	I1218 00:38:22.521686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.521693 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:22.521699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:22.521758 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:22.548085 1311248 cri.go:89] found id: ""
	I1218 00:38:22.548119 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.548126 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:22.548134 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:22.548144 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:22.614828 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:22.614852 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:22.643447 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:22.643462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:22.698947 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:22.698967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:22.713971 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:22.713986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:22.789955 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:22.774480   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.775084   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.776811   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.783961   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.784735   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:22.774480   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.775084   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.776811   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.783961   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.784735   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
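Judging by the timestamps, each diagnostic cycle repeats roughly every three seconds: the harness is polling until the apiserver answers. An equivalent manual wait, as a sketch (assumes curl exists in the node image and that /healthz is readable anonymously, which default RBAC allows):

    # Poll the apiserver health endpoint until it reports "ok".
    until out/minikube-linux-arm64 ssh -p functional-232602 \
            "curl -sk https://localhost:8441/healthz" 2>/dev/null | grep -q ok; do
        sleep 3
    done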
	I1218 00:38:25.290158 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:25.300164 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:25.300226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:25.323897 1311248 cri.go:89] found id: ""
	I1218 00:38:25.323912 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.323919 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:25.323924 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:25.323985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:25.352232 1311248 cri.go:89] found id: ""
	I1218 00:38:25.352245 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.352252 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:25.352257 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:25.352314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:25.376749 1311248 cri.go:89] found id: ""
	I1218 00:38:25.376785 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.376792 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:25.376797 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:25.376868 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:25.401002 1311248 cri.go:89] found id: ""
	I1218 00:38:25.401015 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.401023 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:25.401028 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:25.401089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:25.426497 1311248 cri.go:89] found id: ""
	I1218 00:38:25.426510 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.426517 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:25.426522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:25.426579 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:25.450505 1311248 cri.go:89] found id: ""
	I1218 00:38:25.450518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.450525 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:25.450536 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:25.450593 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:25.478999 1311248 cri.go:89] found id: ""
	I1218 00:38:25.479013 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.479029 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:25.479037 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:25.479048 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:25.540968 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:25.532836   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.533607   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535202   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535518   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.537180   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:25.532836   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.533607   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535202   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535518   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.537180   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:25.540977 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:25.540987 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:25.601527 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:25.601546 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:25.633804 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:25.633826 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:25.691056 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:25.691076 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.206639 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:28.217134 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:28.217198 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:28.242357 1311248 cri.go:89] found id: ""
	I1218 00:38:28.242372 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.242378 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:28.242384 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:28.242449 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:28.271155 1311248 cri.go:89] found id: ""
	I1218 00:38:28.271169 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.271176 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:28.271181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:28.271242 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:28.296330 1311248 cri.go:89] found id: ""
	I1218 00:38:28.296345 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.296352 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:28.296357 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:28.296413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:28.320425 1311248 cri.go:89] found id: ""
	I1218 00:38:28.320449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.320456 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:28.320461 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:28.320528 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:28.345590 1311248 cri.go:89] found id: ""
	I1218 00:38:28.345603 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.345610 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:28.345625 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:28.345688 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:28.374296 1311248 cri.go:89] found id: ""
	I1218 00:38:28.374310 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.374334 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:28.374340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:28.374407 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:28.397991 1311248 cri.go:89] found id: ""
	I1218 00:38:28.398006 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.398014 1311248 logs.go:284] No container was found matching "kindnet"
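
Each probe cycle first looks for a running apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*) and then asks the CRI for each expected control-plane container by name; an empty ID list for every name, as above, means no control-plane container was ever created under /run/containerd/runc/k8s.io. An illustrative loop over the same component names (a sketch of the pattern, not minikube's actual cri.go):

    // Enumerate expected control-plane containers by name via crictl.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		// Error ignored for brevity; an unreachable CRI also yields no IDs.
    		out, _ := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    		} else {
    			fmt.Printf("%s: %v\n", name, ids)
    		}
    	}
    }

On a healthy node each name resolves to at least one container ID; here every query comes back empty, which is why the apiserver probe keeps failing.
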
	I1218 00:38:28.398023 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:28.398033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:28.453794 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:28.453812 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.468531 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:28.468547 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:28.536754 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:28.527630   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529120   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529993   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531712   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531990   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:28.527630   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529120   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529993   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531712   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531990   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:28.536784 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:28.536796 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:28.599155 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:28.599174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:31.143176 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:31.156254 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:31.156313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:31.185437 1311248 cri.go:89] found id: ""
	I1218 00:38:31.185452 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.185460 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:31.185472 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:31.185531 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:31.215130 1311248 cri.go:89] found id: ""
	I1218 00:38:31.215144 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.215153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:31.215157 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:31.215217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:31.240144 1311248 cri.go:89] found id: ""
	I1218 00:38:31.240157 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.240164 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:31.240169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:31.240227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:31.265058 1311248 cri.go:89] found id: ""
	I1218 00:38:31.265072 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.265079 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:31.265084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:31.265150 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:31.289354 1311248 cri.go:89] found id: ""
	I1218 00:38:31.289368 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.289375 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:31.289380 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:31.289438 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:31.319744 1311248 cri.go:89] found id: ""
	I1218 00:38:31.319758 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.319766 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:31.319771 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:31.319826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:31.343739 1311248 cri.go:89] found id: ""
	I1218 00:38:31.343753 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.343760 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:31.343768 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:31.343778 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:31.399267 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:31.399287 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:31.413578 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:31.413595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:31.478705 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:31.470217   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.470850   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472351   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472980   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.474460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:31.470217   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.470850   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472351   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472980   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.474460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
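
The "describe nodes" step invokes the version-pinned kubectl under /var/lib/minikube/binaries with the node's own kubeconfig, and the runner prints the captured stderr twice: once inside the failure message and once in the ** stderr ** block. A sketch of the same invocation with the exit status surfaced (path and arguments taken from the log; illustrative only):

    // Invoke the pinned kubectl with an explicit kubeconfig and report its exit status.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl" // path from the log
    	cmd := exec.Command("sudo", kubectl, "describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s\n", out)
    	if err != nil {
    		// An *exec.ExitError here carries the "Process exited with status 1" detail.
    		fmt.Println("describe nodes failed:", err)
    	}
    }
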
	I1218 00:38:31.478714 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:31.478724 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:31.540680 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:31.540703 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.068816 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:34.079525 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:34.079589 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:34.106415 1311248 cri.go:89] found id: ""
	I1218 00:38:34.106432 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.106440 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:34.106445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:34.106506 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:34.131181 1311248 cri.go:89] found id: ""
	I1218 00:38:34.131195 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.131202 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:34.131208 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:34.131265 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:34.166885 1311248 cri.go:89] found id: ""
	I1218 00:38:34.166898 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.166906 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:34.166911 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:34.166970 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:34.197771 1311248 cri.go:89] found id: ""
	I1218 00:38:34.197786 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.197793 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:34.197798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:34.197856 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:34.226531 1311248 cri.go:89] found id: ""
	I1218 00:38:34.226546 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.226552 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:34.226557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:34.226614 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:34.252100 1311248 cri.go:89] found id: ""
	I1218 00:38:34.252114 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.252121 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:34.252127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:34.252185 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:34.278653 1311248 cri.go:89] found id: ""
	I1218 00:38:34.278667 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.278675 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:34.278683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:34.278694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:34.293444 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:34.293463 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:34.359201 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:34.359211 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:34.359221 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:34.420750 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:34.420773 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.449621 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:34.449637 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:37.006206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:37.019401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:37.019472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:37.047646 1311248 cri.go:89] found id: ""
	I1218 00:38:37.047660 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.047667 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:37.047673 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:37.047733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:37.076612 1311248 cri.go:89] found id: ""
	I1218 00:38:37.076646 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.076653 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:37.076658 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:37.076717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:37.102368 1311248 cri.go:89] found id: ""
	I1218 00:38:37.102383 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.102390 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:37.102395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:37.102452 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:37.126829 1311248 cri.go:89] found id: ""
	I1218 00:38:37.126843 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.126850 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:37.126855 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:37.126913 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:37.159965 1311248 cri.go:89] found id: ""
	I1218 00:38:37.159980 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.159987 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:37.159992 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:37.160048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:37.193535 1311248 cri.go:89] found id: ""
	I1218 00:38:37.193549 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.193558 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:37.193564 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:37.193622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:37.224708 1311248 cri.go:89] found id: ""
	I1218 00:38:37.224723 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.224730 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:37.224738 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:37.224749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:37.287765 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:37.287775 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:37.287787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:37.349218 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:37.349239 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:37.377886 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:37.377902 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:37.435205 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:37.435224 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:39.950327 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:39.960885 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:39.960948 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:39.985573 1311248 cri.go:89] found id: ""
	I1218 00:38:39.985587 1311248 logs.go:282] 0 containers: []
	W1218 00:38:39.985596 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:39.985602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:39.985662 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:40.020843 1311248 cri.go:89] found id: ""
	I1218 00:38:40.020859 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.020867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:40.020873 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:40.020949 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:40.067991 1311248 cri.go:89] found id: ""
	I1218 00:38:40.068007 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.068015 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:40.068021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:40.068096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:40.097024 1311248 cri.go:89] found id: ""
	I1218 00:38:40.097039 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.097047 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:40.097053 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:40.097118 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:40.127502 1311248 cri.go:89] found id: ""
	I1218 00:38:40.127518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.127526 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:40.127531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:40.127595 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:40.165566 1311248 cri.go:89] found id: ""
	I1218 00:38:40.165580 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.165587 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:40.165593 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:40.165660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:40.204927 1311248 cri.go:89] found id: ""
	I1218 00:38:40.204940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.204948 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:40.204956 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:40.204967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:40.222297 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:40.222314 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:40.292382 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:40.292392 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:40.292403 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:40.353852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:40.353871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:40.385828 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:40.385844 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:42.942427 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:42.952937 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:42.952996 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:42.982184 1311248 cri.go:89] found id: ""
	I1218 00:38:42.982201 1311248 logs.go:282] 0 containers: []
	W1218 00:38:42.982208 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:42.982213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:42.982271 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:43.009928 1311248 cri.go:89] found id: ""
	I1218 00:38:43.009944 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.009952 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:43.009957 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:43.010021 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:43.036384 1311248 cri.go:89] found id: ""
	I1218 00:38:43.036397 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.036405 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:43.036410 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:43.036472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:43.061945 1311248 cri.go:89] found id: ""
	I1218 00:38:43.061959 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.061967 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:43.061972 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:43.062030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:43.087977 1311248 cri.go:89] found id: ""
	I1218 00:38:43.087992 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.087999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:43.088005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:43.088069 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:43.113297 1311248 cri.go:89] found id: ""
	I1218 00:38:43.113312 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.113319 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:43.113324 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:43.113390 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:43.148378 1311248 cri.go:89] found id: ""
	I1218 00:38:43.148392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.148399 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:43.148408 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:43.148419 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:43.218202 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:43.218227 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:43.234424 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:43.234441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:43.295849 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:43.287382   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.287819   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289537   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289959   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.291588   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:43.287382   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.287819   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289537   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289959   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.291588   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:43.295860 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:43.295871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:43.357903 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:43.357924 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:45.889646 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:45.899918 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:45.899981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:45.923610 1311248 cri.go:89] found id: ""
	I1218 00:38:45.923623 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.923630 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:45.923635 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:45.923696 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:45.949282 1311248 cri.go:89] found id: ""
	I1218 00:38:45.949296 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.949304 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:45.949309 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:45.949371 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:45.974071 1311248 cri.go:89] found id: ""
	I1218 00:38:45.974085 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.974092 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:45.974097 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:45.974153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:45.997865 1311248 cri.go:89] found id: ""
	I1218 00:38:45.997880 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.997887 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:45.997892 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:45.997953 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:46.026399 1311248 cri.go:89] found id: ""
	I1218 00:38:46.026413 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.026426 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:46.026432 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:46.026490 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:46.060011 1311248 cri.go:89] found id: ""
	I1218 00:38:46.060026 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.060033 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:46.060038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:46.060097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:46.095378 1311248 cri.go:89] found id: ""
	I1218 00:38:46.095392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.095398 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:46.095407 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:46.095418 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:46.110828 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:46.110845 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:46.194637 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:46.194647 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:46.194657 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:46.265968 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:46.265989 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:46.298428 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:46.298444 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:48.855794 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:48.868391 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:48.868457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:48.898010 1311248 cri.go:89] found id: ""
	I1218 00:38:48.898024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.898032 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:48.898037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:48.898097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:48.926962 1311248 cri.go:89] found id: ""
	I1218 00:38:48.926976 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.926984 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:48.926989 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:48.927046 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:48.953073 1311248 cri.go:89] found id: ""
	I1218 00:38:48.953096 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.953104 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:48.953109 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:48.953171 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:48.978527 1311248 cri.go:89] found id: ""
	I1218 00:38:48.978542 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.978548 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:48.978554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:48.978611 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:49.005774 1311248 cri.go:89] found id: ""
	I1218 00:38:49.005791 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.005800 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:49.005805 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:49.005881 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:49.032714 1311248 cri.go:89] found id: ""
	I1218 00:38:49.032743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.032751 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:49.032756 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:49.032845 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:49.058437 1311248 cri.go:89] found id: ""
	I1218 00:38:49.058451 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.058459 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:49.058468 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:49.058478 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:49.114793 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:49.114813 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:49.129898 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:49.129916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:49.218168 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:49.218179 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:49.218190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:49.289574 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:49.289595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:51.822637 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:51.833100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:51.833161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:51.858494 1311248 cri.go:89] found id: ""
	I1218 00:38:51.858508 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.858515 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:51.858520 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:51.858609 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:51.883202 1311248 cri.go:89] found id: ""
	I1218 00:38:51.883217 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.883224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:51.883229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:51.883286 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:51.911732 1311248 cri.go:89] found id: ""
	I1218 00:38:51.911746 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.911753 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:51.911758 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:51.911813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:51.937059 1311248 cri.go:89] found id: ""
	I1218 00:38:51.937073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.937080 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:51.937086 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:51.937144 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:51.960983 1311248 cri.go:89] found id: ""
	I1218 00:38:51.960998 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.961016 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:51.961021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:51.961095 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:51.985889 1311248 cri.go:89] found id: ""
	I1218 00:38:51.985904 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.985911 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:51.985916 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:51.985976 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:52.012132 1311248 cri.go:89] found id: ""
	I1218 00:38:52.012147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:52.012155 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:52.012163 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:52.012174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:52.080718 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:52.080736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:52.080748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:52.144427 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:52.144446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:52.176847 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:52.176869 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:52.239307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:52.239325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
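Each cycle in this section opens with the same probe: one crictl query per expected control-plane container, all of which come back empty (found id: ""), which is what routes every pass into log gathering. The probe can be reproduced standalone as a loop over the seven names queried above (a sketch; run on the minikube node itself, where crictl talks to containerd):

    #!/bin/bash
    # Query each expected component by container name, as the log does.
    # No output for a name means no container (running or exited) matched.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done
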
	I1218 00:38:54.754340 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:54.764793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:54.764857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:54.794012 1311248 cri.go:89] found id: ""
	I1218 00:38:54.794027 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.794034 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:54.794039 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:54.794096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:54.823133 1311248 cri.go:89] found id: ""
	I1218 00:38:54.823147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.823155 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:54.823160 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:54.823216 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:54.847977 1311248 cri.go:89] found id: ""
	I1218 00:38:54.847991 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.847998 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:54.848003 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:54.848064 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:54.873449 1311248 cri.go:89] found id: ""
	I1218 00:38:54.873462 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.873469 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:54.873475 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:54.873532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:54.897891 1311248 cri.go:89] found id: ""
	I1218 00:38:54.897905 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.897922 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:54.897928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:54.897985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:54.922432 1311248 cri.go:89] found id: ""
	I1218 00:38:54.922449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.922456 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:54.922462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:54.922520 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:54.947869 1311248 cri.go:89] found id: ""
	I1218 00:38:54.947884 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.947908 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:54.947916 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:54.947927 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:55.005409 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:55.005434 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:55.026491 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:55.026508 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:55.094641 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:55.094652 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:55.094663 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:55.159462 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:55.159481 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.695023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:57.706079 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:57.706147 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:57.735083 1311248 cri.go:89] found id: ""
	I1218 00:38:57.735106 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.735114 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:57.735119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:57.735178 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:57.762228 1311248 cri.go:89] found id: ""
	I1218 00:38:57.762242 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.762249 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:57.762255 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:57.762313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:57.787211 1311248 cri.go:89] found id: ""
	I1218 00:38:57.787226 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.787233 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:57.787238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:57.787303 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:57.812671 1311248 cri.go:89] found id: ""
	I1218 00:38:57.812686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.812693 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:57.812699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:57.812762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:57.840939 1311248 cri.go:89] found id: ""
	I1218 00:38:57.840953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.840961 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:57.840966 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:57.841031 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:57.867148 1311248 cri.go:89] found id: ""
	I1218 00:38:57.867163 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.867170 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:57.867175 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:57.867232 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:57.891633 1311248 cri.go:89] found id: ""
	I1218 00:38:57.891648 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.891665 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:57.891674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:57.891684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.918896 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:57.918913 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:57.975605 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:57.975625 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:57.990660 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:57.990676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:58.063038 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:58.053532   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.054524   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.056457   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.057354   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.058032   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:58.053532   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.054524   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.056457   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.057354   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.058032   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:58.063048 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:58.063061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
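When the probe comes up empty, each cycle collects the same four sources, only in varying order: kubelet and containerd unit logs via journalctl, kernel warnings via dmesg, and node state via the kubectl binary staged under /var/lib/minikube. Gathered in one place (a sketch assembled from the exact commands the cycles run; nothing here is new):

    #!/bin/bash
    # The four "Gathering logs for ..." commands, as run in each cycle above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
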
	I1218 00:39:00.627359 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:00.638675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:00.638768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:00.669731 1311248 cri.go:89] found id: ""
	I1218 00:39:00.669745 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.669752 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:00.669757 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:00.669824 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:00.697124 1311248 cri.go:89] found id: ""
	I1218 00:39:00.697138 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.697145 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:00.697151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:00.697211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:00.722455 1311248 cri.go:89] found id: ""
	I1218 00:39:00.722469 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.722476 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:00.722486 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:00.722545 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:00.750996 1311248 cri.go:89] found id: ""
	I1218 00:39:00.751010 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.751018 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:00.751023 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:00.751091 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:00.780012 1311248 cri.go:89] found id: ""
	I1218 00:39:00.780026 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.780033 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:00.780038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:00.780105 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:00.807119 1311248 cri.go:89] found id: ""
	I1218 00:39:00.807133 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.807140 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:00.807145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:00.807213 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:00.836658 1311248 cri.go:89] found id: ""
	I1218 00:39:00.836673 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.836681 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:00.836689 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:00.836699 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:00.851616 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:00.851633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:00.919909 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:00.908348   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.909901   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.912294   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.913473   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.915083   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:00.908348   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.909901   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.912294   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.913473   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.915083   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:00.919918 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:00.919929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.985802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:00.985823 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:01.017691 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:01.017707 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.574413 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:03.585024 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:03.585088 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:03.615721 1311248 cri.go:89] found id: ""
	I1218 00:39:03.615735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.615742 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:03.615748 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:03.615811 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:03.641216 1311248 cri.go:89] found id: ""
	I1218 00:39:03.641230 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.641237 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:03.641243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:03.641307 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:03.665604 1311248 cri.go:89] found id: ""
	I1218 00:39:03.665618 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.665625 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:03.665639 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:03.665717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:03.690936 1311248 cri.go:89] found id: ""
	I1218 00:39:03.690951 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.690958 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:03.690970 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:03.691030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:03.716763 1311248 cri.go:89] found id: ""
	I1218 00:39:03.716794 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.716806 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:03.716811 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:03.716898 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:03.742156 1311248 cri.go:89] found id: ""
	I1218 00:39:03.742170 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.742177 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:03.742183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:03.742240 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:03.771205 1311248 cri.go:89] found id: ""
	I1218 00:39:03.771220 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.771227 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:03.771235 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:03.771245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:03.834106 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:03.834127 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:03.863112 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:03.863129 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.919444 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:03.919465 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:03.934588 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:03.934607 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:04.000293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:03.991688   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.992412   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994242   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994901   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.996407   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:03.991688   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.992412   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994242   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994901   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.996407   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
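Every kubectl attempt in these cycles fails with the same dial error against [::1]:8441, which is consistent with the empty crictl results: no apiserver container exists, so nothing is bound to the port the test configured. Two quick manual checks that would confirm this on the node (a sketch; ss from iproute2 is an assumption, pgrep is the same probe the log itself polls with):

    # Is anything listening on the apiserver port?
    sudo ss -ltn sport = :8441
    # Is any kube-apiserver process alive? (Same pattern the log polls with.)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
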
	I1218 00:39:06.500788 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:06.511530 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:06.511596 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:06.536538 1311248 cri.go:89] found id: ""
	I1218 00:39:06.536554 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.536562 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:06.536568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:06.536651 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:06.565199 1311248 cri.go:89] found id: ""
	I1218 00:39:06.565213 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.565219 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:06.565224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:06.565283 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:06.589614 1311248 cri.go:89] found id: ""
	I1218 00:39:06.589628 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.589636 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:06.589641 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:06.589700 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:06.614004 1311248 cri.go:89] found id: ""
	I1218 00:39:06.614019 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.614027 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:06.614032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:06.614093 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:06.638819 1311248 cri.go:89] found id: ""
	I1218 00:39:06.638833 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.638841 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:06.638846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:06.638908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:06.666620 1311248 cri.go:89] found id: ""
	I1218 00:39:06.666634 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.666643 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:06.666648 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:06.666707 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:06.694192 1311248 cri.go:89] found id: ""
	I1218 00:39:06.694207 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.694216 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:06.694224 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:06.694235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:06.709318 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:06.709336 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:06.773553 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:06.764393   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.765251   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767036   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767646   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.769278   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:06.764393   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.765251   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767036   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767646   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.769278   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:06.773564 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:06.773587 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:06.842917 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:06.842937 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:06.877280 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:06.877296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:09.433923 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:09.445181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:09.445248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:09.470100 1311248 cri.go:89] found id: ""
	I1218 00:39:09.470115 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.470122 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:09.470127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:09.470184 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:09.499949 1311248 cri.go:89] found id: ""
	I1218 00:39:09.499964 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.499973 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:09.499978 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:09.500044 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:09.526313 1311248 cri.go:89] found id: ""
	I1218 00:39:09.526328 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.526335 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:09.526340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:09.526404 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:09.551831 1311248 cri.go:89] found id: ""
	I1218 00:39:09.551844 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.551851 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:09.551857 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:09.551923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:09.577535 1311248 cri.go:89] found id: ""
	I1218 00:39:09.577549 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.577557 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:09.577561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:09.577622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:09.602570 1311248 cri.go:89] found id: ""
	I1218 00:39:09.602584 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.602591 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:09.602597 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:09.602658 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:09.630715 1311248 cri.go:89] found id: ""
	I1218 00:39:09.630729 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.630736 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:09.630745 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:09.630755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:09.686840 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:09.686859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:09.703315 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:09.703331 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:09.770650 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:09.762484   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.762995   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.764603   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.765164   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.766687   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:09.762484   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.762995   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.764603   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.765164   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.766687   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:09.770660 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:09.770670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:09.832439 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:09.832457 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:12.361961 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:12.372127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:12.372190 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:12.408061 1311248 cri.go:89] found id: ""
	I1218 00:39:12.408075 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.408082 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:12.408088 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:12.408145 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:12.434860 1311248 cri.go:89] found id: ""
	I1218 00:39:12.434874 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.434881 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:12.434886 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:12.434946 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:12.465255 1311248 cri.go:89] found id: ""
	I1218 00:39:12.465270 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.465278 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:12.465283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:12.465341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:12.494330 1311248 cri.go:89] found id: ""
	I1218 00:39:12.494344 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.494350 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:12.494356 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:12.494420 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:12.518885 1311248 cri.go:89] found id: ""
	I1218 00:39:12.518900 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.518907 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:12.518912 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:12.518973 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:12.543549 1311248 cri.go:89] found id: ""
	I1218 00:39:12.543564 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.543573 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:12.543578 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:12.543641 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:12.568469 1311248 cri.go:89] found id: ""
	I1218 00:39:12.568483 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.568500 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:12.568507 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:12.568519 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:12.624017 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:12.624039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:12.639011 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:12.639028 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:12.703723 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:12.695186   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.695878   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697409   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697942   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.699375   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:12.695186   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.695878   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697409   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697942   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.699375   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:12.703734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:12.703744 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:12.765331 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:12.765350 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.294913 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:15.308145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:15.308210 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:15.340203 1311248 cri.go:89] found id: ""
	I1218 00:39:15.340218 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.340225 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:15.340230 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:15.340289 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:15.367732 1311248 cri.go:89] found id: ""
	I1218 00:39:15.367747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.367754 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:15.367760 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:15.367818 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:15.398027 1311248 cri.go:89] found id: ""
	I1218 00:39:15.398042 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.398049 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:15.398055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:15.398115 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:15.430352 1311248 cri.go:89] found id: ""
	I1218 00:39:15.430366 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.430373 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:15.430379 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:15.430442 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:15.461268 1311248 cri.go:89] found id: ""
	I1218 00:39:15.461283 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.461291 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:15.461297 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:15.461361 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:15.487656 1311248 cri.go:89] found id: ""
	I1218 00:39:15.487671 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.487678 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:15.487684 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:15.487744 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:15.516835 1311248 cri.go:89] found id: ""
	I1218 00:39:15.516850 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.516858 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:15.516867 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:15.516877 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:15.584348 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:15.575875   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.576694   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578221   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578844   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.580399   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
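Every "describe nodes" attempt in this phase fails the same way: with no kube-apiserver container running, nothing is listening on port 8441, so kubectl gets connection refused on [::1]:8441. A one-line check on the node (a sketch; assumes ss(8) is available in the image):

	sudo ss -ltnp | grep ':8441' || echo 'no listener on 8441'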
	I1218 00:39:15.584357 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:15.584377 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:15.646829 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:15.646849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.675913 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:15.675929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:15.731421 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:15.731441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
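The block above is one complete probe-and-gather cycle: minikube looks for a kube-apiserver process, asks crictl for each control-plane component in turn, and, finding none, collects kubelet, dmesg, describe-nodes, containerd and container-status logs before retrying. The same cycle repeats roughly every three seconds below. The probe itself reduces to the following, runnable by hand inside the node (a sketch built only from the commands shown in the log):

	# same component list and crictl invocation as the cycle above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -n "$ids" ] || echo "no container matching \"$c\""
	done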
	I1218 00:39:18.246605 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:18.257277 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:18.257340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:18.282497 1311248 cri.go:89] found id: ""
	I1218 00:39:18.282512 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.282519 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:18.282527 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:18.282594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:18.317178 1311248 cri.go:89] found id: ""
	I1218 00:39:18.317193 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.317200 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:18.317205 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:18.317267 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:18.342018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.342032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.342039 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:18.342044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:18.342098 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:18.366018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.366032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.366040 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:18.366045 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:18.366107 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:18.390880 1311248 cri.go:89] found id: ""
	I1218 00:39:18.390894 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.390902 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:18.390908 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:18.390968 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:18.427152 1311248 cri.go:89] found id: ""
	I1218 00:39:18.427167 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.427174 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:18.427181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:18.427241 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:18.458481 1311248 cri.go:89] found id: ""
	I1218 00:39:18.458495 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.458502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:18.458510 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:18.458521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:18.486379 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:18.486397 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:18.546371 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:18.546396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:18.561410 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:18.561431 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:18.625094 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:18.616770   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.617298   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.618947   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.619597   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.621337   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:18.625105 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:18.625118 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.187071 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:21.197777 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:21.197842 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:21.228457 1311248 cri.go:89] found id: ""
	I1218 00:39:21.228472 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.228479 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:21.228485 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:21.228551 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:21.254227 1311248 cri.go:89] found id: ""
	I1218 00:39:21.254240 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.254258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:21.254264 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:21.254321 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:21.283166 1311248 cri.go:89] found id: ""
	I1218 00:39:21.283180 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.283187 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:21.283193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:21.283259 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:21.307940 1311248 cri.go:89] found id: ""
	I1218 00:39:21.307954 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.307962 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:21.307967 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:21.308022 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:21.333576 1311248 cri.go:89] found id: ""
	I1218 00:39:21.333590 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.333597 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:21.333602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:21.333660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:21.357404 1311248 cri.go:89] found id: ""
	I1218 00:39:21.357418 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.357425 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:21.357430 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:21.357488 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:21.386789 1311248 cri.go:89] found id: ""
	I1218 00:39:21.386803 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.386811 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:21.386819 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:21.386830 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:21.467813 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:21.459694   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.460331   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.461853   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.462343   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.463834   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:21.467824 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:21.467834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.529999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:21.530019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:21.561213 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:21.561228 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:21.619110 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:21.619128 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:24.133884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:24.144224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:24.144298 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:24.169895 1311248 cri.go:89] found id: ""
	I1218 00:39:24.169909 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.169916 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:24.169922 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:24.169981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:24.196376 1311248 cri.go:89] found id: ""
	I1218 00:39:24.196390 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.196396 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:24.196401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:24.196464 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:24.220959 1311248 cri.go:89] found id: ""
	I1218 00:39:24.220978 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.220986 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:24.220991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:24.221051 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:24.246721 1311248 cri.go:89] found id: ""
	I1218 00:39:24.246735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.246745 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:24.246751 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:24.246819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:24.271380 1311248 cri.go:89] found id: ""
	I1218 00:39:24.271394 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.271401 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:24.271406 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:24.271466 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:24.298631 1311248 cri.go:89] found id: ""
	I1218 00:39:24.298645 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.298652 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:24.298657 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:24.298713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:24.322933 1311248 cri.go:89] found id: ""
	I1218 00:39:24.322947 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.322965 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:24.322974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:24.322984 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:24.378307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:24.378325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:24.395279 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:24.395296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:24.478731 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:24.469907   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.470803   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.472526   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.473242   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.474699   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:24.478740 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:24.478750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:24.539558 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:24.539578 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.069527 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:27.079511 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:27.079570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:27.104730 1311248 cri.go:89] found id: ""
	I1218 00:39:27.104747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.104754 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:27.104759 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:27.104826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:27.134528 1311248 cri.go:89] found id: ""
	I1218 00:39:27.134543 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.134551 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:27.134556 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:27.134618 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:27.160290 1311248 cri.go:89] found id: ""
	I1218 00:39:27.160304 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.160311 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:27.160316 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:27.160374 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:27.187607 1311248 cri.go:89] found id: ""
	I1218 00:39:27.187621 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.187628 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:27.187634 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:27.187691 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:27.214602 1311248 cri.go:89] found id: ""
	I1218 00:39:27.214616 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.214623 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:27.214630 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:27.214690 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:27.239452 1311248 cri.go:89] found id: ""
	I1218 00:39:27.239466 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.239474 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:27.239479 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:27.239538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:27.268209 1311248 cri.go:89] found id: ""
	I1218 00:39:27.268232 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.268240 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:27.268248 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:27.268259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:27.283007 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:27.283033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:27.351624 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:27.341545   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.342008   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.344497   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.345754   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.346472   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:27.351634 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:27.351644 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:27.414794 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:27.414814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.449027 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:27.449042 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:30.008353 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:30.051512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:30.051599 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:30.142207 1311248 cri.go:89] found id: ""
	I1218 00:39:30.142226 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.142234 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:30.142241 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:30.142317 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:30.175952 1311248 cri.go:89] found id: ""
	I1218 00:39:30.175967 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.175979 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:30.175985 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:30.176054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:30.202613 1311248 cri.go:89] found id: ""
	I1218 00:39:30.202640 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.202649 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:30.202655 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:30.202718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:30.229638 1311248 cri.go:89] found id: ""
	I1218 00:39:30.229653 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.229661 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:30.229666 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:30.229728 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:30.261192 1311248 cri.go:89] found id: ""
	I1218 00:39:30.261206 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.261214 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:30.261220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:30.261285 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:30.288158 1311248 cri.go:89] found id: ""
	I1218 00:39:30.288173 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.288180 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:30.288189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:30.288251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:30.314418 1311248 cri.go:89] found id: ""
	I1218 00:39:30.314432 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.314441 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:30.314450 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:30.314462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:30.369830 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:30.369849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:30.385018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:30.385037 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:30.467908 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:30.467920 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:30.467930 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:30.529075 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:30.529095 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:33.059241 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:33.070119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:33.070182 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:33.095716 1311248 cri.go:89] found id: ""
	I1218 00:39:33.095730 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.095738 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:33.095744 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:33.095804 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:33.121681 1311248 cri.go:89] found id: ""
	I1218 00:39:33.121697 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.121711 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:33.121717 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:33.121783 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:33.147424 1311248 cri.go:89] found id: ""
	I1218 00:39:33.147438 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.147445 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:33.147451 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:33.147514 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:33.173916 1311248 cri.go:89] found id: ""
	I1218 00:39:33.173931 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.173938 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:33.173943 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:33.174004 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:33.199675 1311248 cri.go:89] found id: ""
	I1218 00:39:33.199690 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.199697 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:33.199702 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:33.199761 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:33.229684 1311248 cri.go:89] found id: ""
	I1218 00:39:33.229698 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.229706 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:33.229711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:33.229771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:33.255931 1311248 cri.go:89] found id: ""
	I1218 00:39:33.255955 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.255963 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:33.255971 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:33.255981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:33.312520 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:33.312538 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:33.327008 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:33.327024 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:33.392853 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:39:33.392863 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:33.392873 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:33.462852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:33.462872 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:35.991111 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:36.001578 1311248 kubeadm.go:602] duration metric: took 4m4.636770246s to restartPrimaryControlPlane
	W1218 00:39:36.001631 1311248 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1218 00:39:36.001712 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
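After the 4m4.6s noted above in the probe loop, minikube abandons the restart path and wipes the control plane with the kubeadm reset shown here before re-running init. The reset is what empties /etc/kubernetes below: as standard kubeadm behavior (not read from this log), it removes the static pod manifests, the generated kubeconfigs, and local etcd data. To see what it left behind on the node:

	# inspect the post-reset state (paths from the log)
	sudo ls -la /etc/kubernetes /var/lib/etcd 2>/dev/null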
	I1218 00:39:36.428039 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:39:36.441875 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:39:36.449799 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:39:36.449855 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:39:36.457535 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:39:36.457543 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:39:36.457593 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:39:36.465339 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:39:36.465393 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:39:36.472406 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:39:36.480110 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:39:36.480163 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:39:36.487432 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.494964 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:39:36.495019 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.502375 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:39:36.509914 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:39:36.509976 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
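Even though minikube logged "skipping stale config cleanup" above, it still runs a per-file endpoint check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint and removed otherwise. Because kubeadm reset just deleted all four files, every grep exits 2 and every rm is a no-op. Condensed (a sketch; endpoint and file names taken from the log):

	ep='https://control-plane.minikube.internal:8441'
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done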
	I1218 00:39:36.517325 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:39:36.642706 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:39:36.643096 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:39:36.709498 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
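Of the three preflight warnings, the first (missing "configs" kernel module) is cosmetic and the third is a one-liner to fix; the second may matter most here: by its own text, kubelet v1.35+ on a cgroups v1 host needs 'FailCgroupV1' set to 'false' to start at all, which would be consistent with the kubelet never turning healthy in the four minutes that follow.

	# clears the [WARNING Service-kubelet] above
	sudo systemctl enable kubelet.service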
	I1218 00:43:38.241451 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 00:43:38.241477 1311248 kubeadm.go:319] 
	I1218 00:43:38.241546 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
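kubeadm has now waited the full 4m0s for the kubelet health endpoint and given up; everything from here down is it replaying its phase output. When reproducing, the probe it was polling and the checks it recommends are:

	curl -sS http://127.0.0.1:10248/healthz; echo    # the probe that kept failing
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet -n 100 --no-pager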
	I1218 00:43:38.245587 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.245639 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.245728 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.245779 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.245813 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.245856 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.245904 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.245947 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.246021 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.246074 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.246124 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.246169 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.246253 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.246316 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.246394 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.246489 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.246578 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.246661 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.249668 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.249761 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.249825 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.249900 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.249985 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.250056 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.250107 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.250167 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.250231 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.250306 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.250386 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.250429 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.250494 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:38.250547 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:38.250611 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:38.250669 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:38.250731 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:38.250784 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:38.250896 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:38.250969 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:38.255653 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:38.255752 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:38.255840 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:38.255905 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:38.256008 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:38.256128 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:38.256248 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:38.256329 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:38.256365 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:38.256499 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:38.256681 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:43:38.256752 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000096267s
	I1218 00:43:38.256755 1311248 kubeadm.go:319] 
	I1218 00:43:38.256814 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:43:38.256853 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:43:38.256963 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:43:38.256967 1311248 kubeadm.go:319] 
	I1218 00:43:38.257093 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:43:38.257126 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:43:38.257155 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:43:38.257212 1311248 kubeadm.go:319] 
	W1218 00:43:38.257278 1311248 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000096267s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1218 00:43:38.257393 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:43:38.672580 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:43:38.686195 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:43:38.686247 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:43:38.694107 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:43:38.694119 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:43:38.694170 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:43:38.702289 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:43:38.702343 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:43:38.710380 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:43:38.718160 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:43:38.718218 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:43:38.726244 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.734209 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:43:38.734268 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.741907 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:43:38.749716 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:43:38.749773 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 00:43:38.757471 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:43:38.797919 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.797966 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.877731 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.877795 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.877835 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.877879 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.877926 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.877972 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.878019 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.878065 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.878112 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.878155 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.878202 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.878247 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.941330 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.941446 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.941535 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.951935 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.957317 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.957410 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.957474 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.957580 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.957646 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.957723 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.957784 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.957852 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.957913 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.957987 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.958059 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.958095 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.958151 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:39.202920 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:39.377892 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:39.964483 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:40.103558 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:40.457630 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:40.458383 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:40.462089 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:40.465489 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:40.465583 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:40.465654 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:40.465716 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:40.486385 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:40.486497 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:40.494535 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:40.494848 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:40.495030 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:40.625355 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:40.625497 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:47:40.625149 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000298437s
	I1218 00:47:40.625174 1311248 kubeadm.go:319] 
	I1218 00:47:40.625227 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:47:40.625262 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:47:40.625362 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:47:40.625367 1311248 kubeadm.go:319] 
	I1218 00:47:40.625481 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:47:40.625513 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:47:40.625550 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:47:40.625553 1311248 kubeadm.go:319] 
	I1218 00:47:40.629455 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:47:40.629954 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:47:40.630083 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:47:40.630316 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 00:47:40.630321 1311248 kubeadm.go:319] 
	I1218 00:47:40.630384 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 00:47:40.630455 1311248 kubeadm.go:403] duration metric: took 12m9.299018648s to StartCluster
	I1218 00:47:40.630487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:47:40.630549 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:47:40.655474 1311248 cri.go:89] found id: ""
	I1218 00:47:40.655489 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.655497 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:47:40.655502 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:47:40.655558 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:47:40.681677 1311248 cri.go:89] found id: ""
	I1218 00:47:40.681692 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.681699 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:47:40.681705 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:47:40.681772 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:47:40.714293 1311248 cri.go:89] found id: ""
	I1218 00:47:40.714307 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.714314 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:47:40.714319 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:47:40.714379 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:47:40.739065 1311248 cri.go:89] found id: ""
	I1218 00:47:40.739089 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.739097 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:47:40.739102 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:47:40.739168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:47:40.763653 1311248 cri.go:89] found id: ""
	I1218 00:47:40.763666 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.763673 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:47:40.763678 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:47:40.763737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:47:40.789038 1311248 cri.go:89] found id: ""
	I1218 00:47:40.789052 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.789059 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:47:40.789065 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:47:40.789124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:47:40.817866 1311248 cri.go:89] found id: ""
	I1218 00:47:40.817880 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.817887 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:47:40.817895 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:47:40.817905 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:47:40.877071 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:47:40.877090 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:47:40.891818 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:47:40.891835 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:47:40.956585 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:47:40.956595 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:47:40.956605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:47:41.023372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:47:41.023390 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1218 00:47:41.051126 1311248 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 00:47:41.051157 1311248 out.go:285] * 
	W1218 00:47:41.051213 1311248 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.051229 1311248 out.go:285] * 
	W1218 00:47:41.053388 1311248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:47:41.058223 1311248 out.go:203] 
	W1218 00:47:41.061890 1311248 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.061936 1311248 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 00:47:41.061956 1311248 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 00:47:41.065091 1311248 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:47:50 functional-232602 containerd[9652]: time="2025-12-18T00:47:50.480115132Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.479679470Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.482375935Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.484746123Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.493400844Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.832040692Z" level=info msg="No images store for sha256:88871186651aea0ed5608315e891b3426e70e8f85f75cbd135c4079fb9a8af37"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.834441140Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.842565463Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.843007052Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.134966568Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.137526298Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.142612413Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.150391104Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.447523093Z" level=info msg="No images store for sha256:88871186651aea0ed5608315e891b3426e70e8f85f75cbd135c4079fb9a8af37"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.449756341Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.461849843Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.462352304Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.465606883Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.468013616Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.471019652Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.479506099Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.295881886Z" level=info msg="No images store for sha256:fbee3dfdb946545a8487e59f5adaf8b308b880e0a9660068998d6d7ea3033fed"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.298353921Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.307420645Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.307912686Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:49:35.084892   23240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:35.085700   23240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:35.087360   23240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:35.087729   23240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:35.089231   23240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:49:35 up  7:32,  0 user,  load average: 0.49, 0.37, 0.46
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:49:32 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:32 functional-232602 kubelet[23063]: E1218 00:49:32.208988   23063 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:32 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:32 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:32 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 469.
	Dec 18 00:49:32 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:32 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:32 functional-232602 kubelet[23084]: E1218 00:49:32.960961   23084 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:32 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:32 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:33 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 470.
	Dec 18 00:49:33 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:33 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:33 functional-232602 kubelet[23121]: E1218 00:49:33.699551   23121 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:33 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:33 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:34 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 471.
	Dec 18 00:49:34 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:34 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:34 functional-232602 kubelet[23156]: E1218 00:49:34.465539   23156 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:34 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:34 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:35 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 472.
	Dec 18 00:49:35 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:35 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
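The kubelet journal above shows why the health check at 127.0.0.1:10248 never answered: kubelet v1.35.0-rc.1 exits during configuration validation on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1") and systemd restart-loops it hundreds of times, which matches the [WARNING SystemVerification] note in the kubeadm output. A minimal sketch of the KubeletConfiguration fragment that warning points at; the camelCase spelling failCgroupV1 for the 'FailCgroupV1' option is an assumption, and the warning adds that the preflight validation must also be explicitly skipped:

	# Sketch: re-enable cgroup v1 support for kubelet >= v1.35, per the warning above.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false   # assumed camelCase form of the 'FailCgroupV1' option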
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (388.09025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (3.20s)
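The suggestion printed in the log above points at the kubelet cgroup driver. A hedged reconstruction of that retry, reusing the profile, driver, runtime, and Kubernetes version seen throughout this report; whether it clears the cgroup v1 validation failure on this 5.15 kernel is not verified here:

	out/minikube-linux-arm64 start -p functional-232602 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 \
	  --extra-config=kubelet.cgroup-driver=systemd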

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-232602 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-232602 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.534053ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-232602 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-232602 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-232602 describe po hello-node-connect: exit status 1 (56.912061ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-232602 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-232602 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-232602 logs -l app=hello-node-connect: exit status 1 (60.378575ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-232602 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-232602 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-232602 describe svc hello-node-connect: exit status 1 (60.530785ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-232602 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
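Every kubectl probe above is refused at 192.168.49.2:8441, which points at the apiserver itself being down rather than at the missing hello-node-connect objects. A minimal triage sketch, assuming the standard kic node layout; these commands are illustrative and were not part of the recorded run:

	# probe the apiserver port directly from the CI host
	nc -zv -w 2 192.168.49.2 8441
	# look for a running or exited apiserver container inside the node
	docker exec functional-232602 sudo crictl ps -a --name kube-apiserver
	# check recent kubelet activity for crash loops
	docker exec functional-232602 sudo journalctl -u kubelet --no-pager -n 50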
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
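The inspect output above shows the node container Running with 8441/tcp published on 127.0.0.1:33905, so the apiserver can also be probed through the forwarded host port. A hedged sketch (assumed commands, not part of the recorded run); a refused or empty reply while the container stays Running would localize the fault to the control plane inside the node:

	# hit the apiserver liveness endpoint via the forwarded port
	curl -sk --max-time 2 https://127.0.0.1:33905/livez; echo
	# confirm the 8441 mapping independently of the full inspect dump
	docker port functional-232602 8441/tcp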
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (343.222263ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
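Note that "status --format={{.Host}}" prints only the host field, while the non-zero exit code encodes component state rather than a hard failure; the unfiltered form (an assumed invocation, not part of the recorded run) would name the degraded component, presumably the apiserver here:

	out/minikube-linux-arm64 status -p functional-232602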
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-232602 ssh sudo cat /etc/ssl/certs/1261148.pem                                                                                                       │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /usr/share/ca-certificates/1261148.pem                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image ls                                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image save kicbase/echo-server:functional-232602 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /etc/ssl/certs/12611482.pem                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /usr/share/ca-certificates/12611482.pem                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image ls                                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo cat /etc/test/nested/copy/1261148/hosts                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image ls                                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ service │ functional-232602 service list                                                                                                                                  │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ image   │ functional-232602 image save --daemon kicbase/echo-server:functional-232602 --alsologtostderr                                                                   │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ service │ functional-232602 service list -o json                                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ ssh     │ functional-232602 ssh echo hello                                                                                                                                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ service │ functional-232602 service --namespace=default --https --url hello-node                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ ssh     │ functional-232602 ssh cat /etc/hostname                                                                                                                         │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ service │ functional-232602 service hello-node --url --format={{.IP}}                                                                                                     │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ tunnel  │ functional-232602 tunnel --alsologtostderr                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ tunnel  │ functional-232602 tunnel --alsologtostderr                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ service │ functional-232602 service hello-node --url                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ tunnel  │ functional-232602 tunnel --alsologtostderr                                                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ addons  │ functional-232602 addons list                                                                                                                                   │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ addons  │ functional-232602 addons list -o json                                                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:35:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:35:27.044902 1311248 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:35:27.045002 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045006 1311248 out.go:374] Setting ErrFile to fd 2...
	I1218 00:35:27.045010 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045249 1311248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:35:27.045606 1311248 out.go:368] Setting JSON to false
	I1218 00:35:27.046406 1311248 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26273,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:35:27.046458 1311248 start.go:143] virtualization:  
	I1218 00:35:27.049930 1311248 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:35:27.052925 1311248 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:35:27.053012 1311248 notify.go:221] Checking for updates...
	I1218 00:35:27.058856 1311248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:35:27.061872 1311248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:35:27.064792 1311248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:35:27.067743 1311248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:35:27.070676 1311248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:35:27.074096 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:27.074190 1311248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:35:27.106641 1311248 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:35:27.106748 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.164302 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.154715728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.164392 1311248 docker.go:319] overlay module found
	I1218 00:35:27.167427 1311248 out.go:179] * Using the docker driver based on existing profile
	I1218 00:35:27.170281 1311248 start.go:309] selected driver: docker
	I1218 00:35:27.170292 1311248 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.170444 1311248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:35:27.170546 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.230048 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.221277832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.230469 1311248 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 00:35:27.230491 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:27.230542 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:27.230580 1311248 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.235511 1311248 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:35:27.238271 1311248 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:35:27.241192 1311248 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:35:27.243943 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:27.243991 1311248 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:35:27.243999 1311248 cache.go:65] Caching tarball of preloaded images
	I1218 00:35:27.244040 1311248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:35:27.244087 1311248 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:35:27.244096 1311248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:35:27.244211 1311248 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:35:27.263574 1311248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:35:27.263584 1311248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:35:27.263598 1311248 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:35:27.263628 1311248 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:35:27.263679 1311248 start.go:364] duration metric: took 35.445µs to acquireMachinesLock for "functional-232602"
	I1218 00:35:27.263697 1311248 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:35:27.263701 1311248 fix.go:54] fixHost starting: 
	I1218 00:35:27.263946 1311248 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:35:27.280222 1311248 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:35:27.280243 1311248 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:35:27.283327 1311248 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:35:27.283352 1311248 machine.go:94] provisionDockerMachine start ...
	I1218 00:35:27.283428 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.299920 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.300231 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.300238 1311248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:35:27.452356 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.452370 1311248 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:35:27.452432 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.473471 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.473816 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.473825 1311248 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:35:27.640067 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.640142 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.667013 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.667323 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.667342 1311248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:35:27.820945 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:35:27.820961 1311248 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:35:27.820980 1311248 ubuntu.go:190] setting up certificates
	I1218 00:35:27.820989 1311248 provision.go:84] configureAuth start
	I1218 00:35:27.821051 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:27.838852 1311248 provision.go:143] copyHostCerts
	I1218 00:35:27.838916 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:35:27.838924 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:35:27.838994 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:35:27.839097 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:35:27.839100 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:35:27.839128 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:35:27.839186 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:35:27.839190 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:35:27.839213 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:35:27.839265 1311248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:35:28.109890 1311248 provision.go:177] copyRemoteCerts
	I1218 00:35:28.109947 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:35:28.109996 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.127232 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.232344 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:35:28.250086 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:35:28.268448 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:35:28.286339 1311248 provision.go:87] duration metric: took 465.326862ms to configureAuth
	I1218 00:35:28.286357 1311248 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:35:28.286550 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:28.286556 1311248 machine.go:97] duration metric: took 1.003199883s to provisionDockerMachine
	I1218 00:35:28.286562 1311248 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:35:28.286572 1311248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:35:28.286620 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:35:28.286663 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.304025 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.412869 1311248 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:35:28.416834 1311248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:35:28.416854 1311248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:35:28.416865 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:35:28.416921 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:35:28.417025 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:35:28.417099 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:35:28.417168 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:35:28.424798 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:28.442733 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:35:28.462911 1311248 start.go:296] duration metric: took 176.334186ms for postStartSetup
	I1218 00:35:28.462983 1311248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:35:28.463039 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.480489 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.585769 1311248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:35:28.590837 1311248 fix.go:56] duration metric: took 1.327128154s for fixHost
	I1218 00:35:28.590854 1311248 start.go:83] releasing machines lock for "functional-232602", held for 1.327167711s
	I1218 00:35:28.590944 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:28.607738 1311248 ssh_runner.go:195] Run: cat /version.json
	I1218 00:35:28.607789 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.608049 1311248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:35:28.608095 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.626689 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.634380 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.732432 1311248 ssh_runner.go:195] Run: systemctl --version
	I1218 00:35:28.823477 1311248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 00:35:28.828399 1311248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:35:28.828467 1311248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:35:28.836277 1311248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:35:28.836291 1311248 start.go:496] detecting cgroup driver to use...
	I1218 00:35:28.836322 1311248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:35:28.836377 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:35:28.852038 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:35:28.865568 1311248 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:35:28.865634 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:35:28.881324 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:35:28.894482 1311248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:35:29.019814 1311248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:35:29.139455 1311248 docker.go:234] disabling docker service ...
	I1218 00:35:29.139511 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:35:29.157302 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:35:29.172520 1311248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:35:29.290798 1311248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:35:29.409846 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:35:29.423039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:35:29.438313 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:35:29.447458 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:35:29.457161 1311248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:35:29.457221 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:35:29.466703 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.475761 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:35:29.484925 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.493811 1311248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:35:29.502125 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:35:29.511205 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:35:29.520548 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:35:29.530343 1311248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:35:29.538157 1311248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:35:29.545765 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:29.664409 1311248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 00:35:29.789454 1311248 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:35:29.789537 1311248 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:35:29.793414 1311248 start.go:564] Will wait 60s for crictl version
	I1218 00:35:29.793467 1311248 ssh_runner.go:195] Run: which crictl
	I1218 00:35:29.796922 1311248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:35:29.821478 1311248 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:35:29.821534 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.845973 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.874969 1311248 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:35:29.877886 1311248 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:35:29.897397 1311248 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 00:35:29.909164 1311248 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1218 00:35:29.912023 1311248 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:35:29.912156 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:29.912246 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.959601 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.959615 1311248 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:35:29.959670 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.987018 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.987029 1311248 cache_images.go:86] Images are preloaded, skipping loading
	I1218 00:35:29.987035 1311248 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:35:29.987151 1311248 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 00:35:29.987219 1311248 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:35:30.033188 1311248 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1218 00:35:30.033262 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:30.033272 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:30.033285 1311248 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:35:30.033322 1311248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:35:30.033459 1311248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
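The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by "---" separators. As a rough illustration, the sketch below lists the kind of each document using only the standard library; splitting on "---" lines is a simplification, and anything beyond eyeballing should use a real YAML parser.

// kinds.go: rough sketch printing the "kind" of each document in a
// multi-document YAML stream such as /var/tmp/minikube/kubeadm.yaml.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// Naive document split; a YAML parser would handle edge cases.
	for i, doc := range strings.Split(string(data), "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
			}
		}
	}
}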
	I1218 00:35:30.033555 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:35:30.044133 1311248 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:35:30.044224 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:35:30.053566 1311248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:35:30.069600 1311248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:35:30.086185 1311248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1218 00:35:30.100953 1311248 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:35:30.105204 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:30.229133 1311248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:35:30.643842 1311248 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:35:30.643853 1311248 certs.go:195] generating shared ca certs ...
	I1218 00:35:30.643868 1311248 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:35:30.644040 1311248 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:35:30.644079 1311248 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:35:30.644085 1311248 certs.go:257] generating profile certs ...
	I1218 00:35:30.644187 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:35:30.644248 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:35:30.644287 1311248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:35:30.644391 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:35:30.644420 1311248 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:35:30.644426 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:35:30.644455 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:35:30.644481 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:35:30.644512 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:35:30.644557 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:30.645271 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:35:30.667963 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:35:30.688789 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:35:30.707638 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:35:30.727172 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:35:30.745582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:35:30.763537 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:35:30.781521 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:35:30.799255 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:35:30.816582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:35:30.835230 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:35:30.852513 1311248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:35:30.865555 1311248 ssh_runner.go:195] Run: openssl version
	I1218 00:35:30.871911 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.879397 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:35:30.886681 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890109 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890169 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.930894 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:35:30.938142 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.945286 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:35:30.952538 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956151 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956245 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.997157 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:35:31.005056 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.014006 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:35:31.022034 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025894 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025961 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.067200 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
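The repeated three-step pattern above (test -s, ln -fs into /etc/ssl/certs, then openssl x509 -hash) installs each CA under its OpenSSL subject hash, e.g. b5213941.0 for minikubeCA.pem, which is how OpenSSL-linked clients locate trust anchors by directory lookup. A sketch of the same dance, shelling out to the identical openssl invocation seen in the log (needs root and the openssl binary):

// hashlink.go: sketch of the hash-and-symlink step above: compute a
// certificate's OpenSSL subject hash and link it into /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// The ".0" suffix assumes no other installed cert shares this hash;
	// collisions would take ".1", ".2", and so on.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println(cert, "->", link)
}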
	I1218 00:35:31.075278 1311248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:35:31.079306 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:35:31.123391 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:35:31.165879 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:35:31.208281 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:35:31.249146 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:35:31.290212 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
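The -checkend 86400 runs above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The same check is straightforward to do natively, as in this sketch:

// checkend.go: native equivalent of `openssl x509 -noout -checkend 86400`:
// exit non-zero if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from one of the checks in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past the 24h window:", cert.NotAfter)
}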
	I1218 00:35:31.331444 1311248 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:31.331522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:35:31.331580 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.356945 1311248 cri.go:89] found id: ""
	I1218 00:35:31.357003 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:35:31.364788 1311248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:35:31.364798 1311248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:35:31.364876 1311248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:35:31.372428 1311248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.372951 1311248 kubeconfig.go:125] found "functional-232602" server: "https://192.168.49.2:8441"
	I1218 00:35:31.374199 1311248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:35:31.382218 1311248 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-18 00:20:57.479200490 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-18 00:35:30.095938034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
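Drift detection here is just diff -u over the old and new kubeadm.yaml: diff exits 0 when the files match, 1 when they differ, and 2 or more on trouble, so only status 1 triggers the reconfigure path. A sketch of that branch:

// drift.go: sketch of the drift check above: run `diff -u` and branch on
// its exit status (0 = identical, 1 = differ, 2+ = error running diff).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	if cmd.ProcessState == nil {
		panic(err) // diff never ran (e.g. binary missing)
	}
	switch cmd.ProcessState.ExitCode() {
	case 0:
		fmt.Println("no drift; keeping the existing config")
	case 1:
		fmt.Printf("drift detected, reconfiguring:\n%s", out)
	default:
		panic(err)
	}
}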
	I1218 00:35:31.382230 1311248 kubeadm.go:1161] stopping kube-system containers ...
	I1218 00:35:31.382240 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1218 00:35:31.382293 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.418635 1311248 cri.go:89] found id: ""
	I1218 00:35:31.418695 1311248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1218 00:35:31.437319 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:35:31.447695 1311248 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 18 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 18 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 18 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 18 00:25 /etc/kubernetes/scheduler.conf
	
	I1218 00:35:31.447757 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:35:31.455511 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:35:31.463139 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.463194 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:35:31.470550 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.478132 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.478200 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.485959 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:35:31.493702 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.493757 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
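Each kubeconfig under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8441; grep exits 0 on a match and 1 on a miss, and a miss means the file points elsewhere, so it is deleted and left for the kubeconfig phase below to regenerate. A sketch of the same validation:

// stale.go: sketch of the kubeconfig validation above: drop any conf that
// does not reference the expected control-plane endpoint so the
// `kubeadm init phase kubeconfig` step regenerates it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, conf := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil {
			continue // a missing file will be regenerated anyway
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale", conf)
			os.Remove(conf)
		}
	}
}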
	I1218 00:35:31.501195 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:35:31.509596 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:31.563212 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:32.882945 1311248 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.319707666s)
	I1218 00:35:32.883005 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.109967 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.178681 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
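Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed config, each with PATH pointed at the versioned binaries, exactly as the Run: lines above show. A sketch of that sequencing, mirroring the shell shape of those lines:

// phases.go: sketch of the phased restart above: replay selected
// `kubeadm init phase` subcommands in order, stopping at the first failure.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bin = "/var/lib/minikube/binaries/v1.35.0-rc.1"
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		// Same shape as the Run: lines in the log above.
		script := fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init phase %s --config %s`, bin, p, cfg)
		cmd := exec.Command("sudo", "/bin/bash", "-c", script)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", p, err)
			os.Exit(1)
		}
	}
}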
	I1218 00:35:33.229970 1311248 api_server.go:52] waiting for apiserver process to appear ...
	I1218 00:35:33.230040 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:33.730927 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.230378 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.730284 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.230343 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.730919 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.730993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.230539 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.731124 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.230838 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.730863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.230678 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.730230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.230236 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.731068 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:41.231109 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:41.730288 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:42.230203 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:42.730234 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:43.230141 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:43.730185 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:44.231143 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:44.730804 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:45.237230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:45.730893 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:46.230803 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:46.730882 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:47.230533 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:47.731147 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:48.230905 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:48.730814 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:49.230754 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:49.730337 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:50.230375 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:50.731190 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:51.230987 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:51.731023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:52.230495 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:52.730322 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:53.230929 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:53.730922 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:54.231058 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:54.730458 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:55.230148 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:55.730779 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:56.230494 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:56.731136 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:57.231080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:57.730219 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:58.230880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:58.730261 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:59.230265 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:59.730444 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:00.230228 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:00.730965 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:01.231030 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:01.730793 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:02.231094 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:02.730432 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:03.230277 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:03.730969 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:04.230206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:04.731080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:05.230777 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:05.730718 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:06.231042 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:06.730199 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:07.230478 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:07.730807 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:08.230613 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:08.730187 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:09.231163 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:09.731095 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:10.231010 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:10.731081 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:11.230167 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:11.730331 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:12.230144 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:12.730362 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:13.230993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:13.730893 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:14.230791 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:14.731035 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:15.230946 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:15.730274 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:16.230238 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:16.730202 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:17.231089 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:17.730821 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:18.230480 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:18.730348 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:19.230188 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:19.730212 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:20.230315 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:20.730113 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:21.231120 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:21.730951 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:22.230491 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:22.730452 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:23.230231 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:23.730205 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:24.230525 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:24.730779 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:25.230233 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:25.731067 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:26.231079 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:26.730956 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:27.230990 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:27.730196 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:28.230863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:28.730884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:29.230380 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:29.730826 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:30.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:30.731192 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:31.230615 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:31.730900 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:32.230553 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:32.730134 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
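The minute of pgrep lines above is a fixed-cadence wait: every 500ms (note the ...:x.230 / ...:x.730 timestamps) minikube asks whether a kube-apiserver process matching the profile exists, and pgrep exits 0 only on a match. Here the process never appears, so the code falls through to log gathering below. A sketch of such a loop; the 4-minute deadline is an assumption, not minikube's actual timeout:

// waitapi.go: sketch of the apiserver wait loop above: poll
// `pgrep -xnf kube-apiserver.*minikube.*` every 500ms until it succeeds
// or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed window
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}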
	I1218 00:36:33.230238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:33.230314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:33.258458 1311248 cri.go:89] found id: ""
	I1218 00:36:33.258472 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.258484 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:33.258490 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:33.258562 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:33.283965 1311248 cri.go:89] found id: ""
	I1218 00:36:33.283979 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.283986 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:33.283991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:33.284048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:33.308663 1311248 cri.go:89] found id: ""
	I1218 00:36:33.308678 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.308693 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:33.308699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:33.308760 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:33.337762 1311248 cri.go:89] found id: ""
	I1218 00:36:33.337775 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.337783 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:33.337788 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:33.337852 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:33.366489 1311248 cri.go:89] found id: ""
	I1218 00:36:33.366503 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.366510 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:33.366515 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:33.366574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:33.401983 1311248 cri.go:89] found id: ""
	I1218 00:36:33.401998 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.402005 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:33.402010 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:33.402067 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:33.436853 1311248 cri.go:89] found id: ""
	I1218 00:36:33.436867 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.436874 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:33.436883 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:33.436893 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:33.504087 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
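Every describe-nodes attempt fails with connection refused on [::1]:8441, meaning nothing is listening at all, which is consistent with no kube-apiserver container being found above. A quick probe that separates "not listening" from TLS-level failures once something does listen:

// probe.go: sketch distinguishing "connection refused" (no listener on
// 8441) from a TLS handshake failure against a live listener.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("TCP dial failed (no listener):", err)
		return
	}
	defer conn.Close()
	// Verification is skipped here only to test the listener; kubectl
	// verifies against the cluster CA.
	tc := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
	if err := tc.Handshake(); err != nil {
		fmt.Println("listener up but TLS handshake failed:", err)
		return
	}
	fmt.Println("apiserver port is up and speaking TLS")
}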
	I1218 00:36:33.504097 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:33.504107 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:33.570523 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:33.570549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:33.607484 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:33.607500 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:33.664867 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:33.664884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
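With no containers to inspect, the loop falls back to host-level diagnostics: the last 400 lines of the kubelet and containerd journals, a filtered dmesg, and a container listing whose `which crictl || echo crictl` fallback keeps the pipeline failing with a useful error even when crictl is absent. A sketch of the same sweep, tolerating individual collector failures:

// diag.go: sketch of the diagnostics sweep above: tail the relevant unit
// journals and list all containers, continuing past failed collectors.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	collectors := []struct {
		name string
		argv []string
	}{
		{"kubelet", []string{"journalctl", "-u", "kubelet", "-n", "400"}},
		{"containerd", []string{"journalctl", "-u", "containerd", "-n", "400"}},
		{"dmesg", []string{"sh", "-c", "dmesg --level warn,err,crit,alert,emerg | tail -n 400"}},
		{"containers", []string{"crictl", "ps", "-a"}},
	}
	for _, c := range collectors {
		out, err := exec.Command("sudo", c.argv...).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s (collector failed: %v) ==\n", c.name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", c.name, out)
	}
}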
	I1218 00:36:36.181388 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:36.191464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:36.191521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:36.214848 1311248 cri.go:89] found id: ""
	I1218 00:36:36.214863 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.214870 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:36.214876 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:36.214933 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:36.241311 1311248 cri.go:89] found id: ""
	I1218 00:36:36.241324 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.241331 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:36.241336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:36.241394 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:36.265257 1311248 cri.go:89] found id: ""
	I1218 00:36:36.265271 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.265279 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:36.265284 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:36.265343 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:36.288492 1311248 cri.go:89] found id: ""
	I1218 00:36:36.288506 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.288513 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:36.288518 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:36.288574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:36.316558 1311248 cri.go:89] found id: ""
	I1218 00:36:36.316573 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.316580 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:36.316585 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:36.316664 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:36.341952 1311248 cri.go:89] found id: ""
	I1218 00:36:36.341966 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.341973 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:36.341979 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:36.342037 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:36.365945 1311248 cri.go:89] found id: ""
	I1218 00:36:36.365959 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.365966 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:36.365974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:36.365983 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:36.426123 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:36.426142 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:36.444123 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:36.444140 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:36.509193 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:36.509204 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:36.509214 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:36.571649 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:36.571667 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.103696 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:39.113703 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:39.113762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:39.141856 1311248 cri.go:89] found id: ""
	I1218 00:36:39.141870 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.141878 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:39.141883 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:39.141944 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:39.170038 1311248 cri.go:89] found id: ""
	I1218 00:36:39.170052 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.170101 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:39.170107 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:39.170172 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:39.199014 1311248 cri.go:89] found id: ""
	I1218 00:36:39.199028 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.199035 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:39.199041 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:39.199101 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:39.226392 1311248 cri.go:89] found id: ""
	I1218 00:36:39.226414 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.226422 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:39.226427 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:39.226493 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:39.251905 1311248 cri.go:89] found id: ""
	I1218 00:36:39.251920 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.251927 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:39.251932 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:39.251992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:39.276915 1311248 cri.go:89] found id: ""
	I1218 00:36:39.276937 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.276944 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:39.276949 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:39.277007 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:39.301520 1311248 cri.go:89] found id: ""
	I1218 00:36:39.301534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.301542 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:39.301551 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:39.301560 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:39.364240 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:39.364259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.394082 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:39.394098 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:39.460886 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:39.460907 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:39.477258 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:39.477273 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:39.547172 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:42.048213 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:42.059442 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:42.059521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:42.095887 1311248 cri.go:89] found id: ""
	I1218 00:36:42.095903 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.095911 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:42.095917 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:42.095987 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:42.126738 1311248 cri.go:89] found id: ""
	I1218 00:36:42.126756 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.126763 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:42.126769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:42.126846 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:42.183895 1311248 cri.go:89] found id: ""
	I1218 00:36:42.183916 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.183924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:42.183931 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:42.184005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:42.217296 1311248 cri.go:89] found id: ""
	I1218 00:36:42.217313 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.217320 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:42.217333 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:42.217410 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:42.248021 1311248 cri.go:89] found id: ""
	I1218 00:36:42.248038 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.248065 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:42.248071 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:42.248143 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:42.278624 1311248 cri.go:89] found id: ""
	I1218 00:36:42.278650 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.278658 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:42.278664 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:42.278732 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:42.306575 1311248 cri.go:89] found id: ""
	I1218 00:36:42.306589 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.306604 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:42.306613 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:42.306622 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:42.366835 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:42.366859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:42.381793 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:42.381810 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:42.478588 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:42.478598 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:42.478608 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:42.541093 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:42.541114 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
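These repeating ~three-second cycles are minikube's apiserver wait loop: a pgrep for a kube-apiserver process, one crictl query per control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, when every query returns an empty id list, a log-gathering pass over kubelet, dmesg, describe nodes, containerd, and container status before the next retry. A minimal sketch for replaying the same probes by hand, assuming shell access to the node (e.g. via minikube ssh into the failing profile) — it simply reruns the commands the log records:

	# Rerun the probes recorded above; empty crictl output matches the
	# cri.go:89 'found id: ""' lines in the log.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	  printf '%s: ' "$c"
	  sudo crictl ps -a --quiet --name="$c"
	  echo
	done
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
	sudo journalctl -u kubelet -n 400      # the "Gathering logs for kubelet" step
	sudo journalctl -u containerd -n 400   # the "Gathering logs for containerd" step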
	I1218 00:36:45.069751 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:45.106091 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:45.106161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:45.152078 1311248 cri.go:89] found id: ""
	I1218 00:36:45.152105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.152113 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:45.152120 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:45.152202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:45.228849 1311248 cri.go:89] found id: ""
	I1218 00:36:45.228866 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.228874 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:45.228881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:45.229017 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:45.284605 1311248 cri.go:89] found id: ""
	I1218 00:36:45.284640 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.284648 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:45.284654 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:45.284773 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:45.318439 1311248 cri.go:89] found id: ""
	I1218 00:36:45.318454 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.318461 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:45.318467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:45.318532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:45.348962 1311248 cri.go:89] found id: ""
	I1218 00:36:45.348976 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.348984 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:45.348990 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:45.349055 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:45.378098 1311248 cri.go:89] found id: ""
	I1218 00:36:45.378112 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.378119 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:45.378125 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:45.378227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:45.435291 1311248 cri.go:89] found id: ""
	I1218 00:36:45.435311 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.435318 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:45.435335 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:45.435362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:45.505552 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:45.505571 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:45.523778 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:45.523794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:45.592584 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:45.584713   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.585204   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.586780   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.587182   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.588708   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:45.584713   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.585204   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.586780   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.587182   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.588708   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:45.592594 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:45.592606 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:45.658999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:45.659018 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:48.186749 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:48.197169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:48.197230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:48.222369 1311248 cri.go:89] found id: ""
	I1218 00:36:48.222383 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.222390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:48.222396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:48.222459 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:48.247132 1311248 cri.go:89] found id: ""
	I1218 00:36:48.247146 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.247153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:48.247158 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:48.247217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:48.272441 1311248 cri.go:89] found id: ""
	I1218 00:36:48.272455 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.272462 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:48.272467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:48.272526 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:48.302640 1311248 cri.go:89] found id: ""
	I1218 00:36:48.302655 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.302662 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:48.302679 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:48.302737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:48.329411 1311248 cri.go:89] found id: ""
	I1218 00:36:48.329425 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.329433 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:48.329438 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:48.329497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:48.358419 1311248 cri.go:89] found id: ""
	I1218 00:36:48.358433 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.358440 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:48.358445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:48.358503 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:48.383182 1311248 cri.go:89] found id: ""
	I1218 00:36:48.383195 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.383203 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:48.383210 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:48.383220 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:48.451796 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:48.451815 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:48.467080 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:48.467096 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:48.533083 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:48.524386   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.525232   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.526917   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.527532   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.529214   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:48.524386   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.525232   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.526917   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.527532   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.529214   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:48.533092 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:48.533103 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:48.596920 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:48.596940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:51.124756 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:51.135594 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:51.135659 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:51.164133 1311248 cri.go:89] found id: ""
	I1218 00:36:51.164148 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.164156 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:51.164161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:51.164226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:51.190200 1311248 cri.go:89] found id: ""
	I1218 00:36:51.190215 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.190222 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:51.190228 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:51.190291 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:51.216170 1311248 cri.go:89] found id: ""
	I1218 00:36:51.216187 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.216194 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:51.216200 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:51.216263 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:51.246031 1311248 cri.go:89] found id: ""
	I1218 00:36:51.246045 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.246052 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:51.246058 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:51.246122 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:51.278864 1311248 cri.go:89] found id: ""
	I1218 00:36:51.278878 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.278885 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:51.278890 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:51.278963 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:51.303118 1311248 cri.go:89] found id: ""
	I1218 00:36:51.303132 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.303139 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:51.303144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:51.303202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:51.328091 1311248 cri.go:89] found id: ""
	I1218 00:36:51.328105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.328112 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:51.328120 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:51.328130 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:51.385226 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:51.385249 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:51.400951 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:51.400967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:51.479293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:51.470350   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.470962   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.472611   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.473295   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.474905   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:51.470350   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.470962   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.472611   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.473295   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.474905   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:51.479304 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:51.479315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:51.541268 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:51.541288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:54.069293 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:54.080067 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:54.080153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:54.106375 1311248 cri.go:89] found id: ""
	I1218 00:36:54.106390 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.106402 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:54.106408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:54.106467 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:54.131767 1311248 cri.go:89] found id: ""
	I1218 00:36:54.131781 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.131788 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:54.131793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:54.131850 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:54.157519 1311248 cri.go:89] found id: ""
	I1218 00:36:54.157534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.157541 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:54.157546 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:54.157606 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:54.182381 1311248 cri.go:89] found id: ""
	I1218 00:36:54.182396 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.182403 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:54.182408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:54.182478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:54.211219 1311248 cri.go:89] found id: ""
	I1218 00:36:54.211234 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.211241 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:54.211247 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:54.211323 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:54.243605 1311248 cri.go:89] found id: ""
	I1218 00:36:54.243627 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.243634 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:54.243640 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:54.243710 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:54.268614 1311248 cri.go:89] found id: ""
	I1218 00:36:54.268648 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.268655 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:54.268664 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:54.268675 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:54.332655 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:54.324645   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.325064   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326586   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326911   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.328406   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:54.324645   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.325064   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326586   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326911   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.328406   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:54.332668 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:54.332679 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:54.396896 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:54.396916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:54.440350 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:54.440371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:54.503158 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:54.503178 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:57.019672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:57.030198 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:57.030268 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:57.059845 1311248 cri.go:89] found id: ""
	I1218 00:36:57.059859 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.059866 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:57.059872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:57.059939 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:57.086203 1311248 cri.go:89] found id: ""
	I1218 00:36:57.086217 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.086224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:57.086229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:57.086326 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:57.115321 1311248 cri.go:89] found id: ""
	I1218 00:36:57.115335 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.115342 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:57.115347 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:57.115416 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:57.141717 1311248 cri.go:89] found id: ""
	I1218 00:36:57.141731 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.141738 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:57.141743 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:57.141801 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:57.166376 1311248 cri.go:89] found id: ""
	I1218 00:36:57.166389 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.166396 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:57.166400 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:57.166470 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:57.194461 1311248 cri.go:89] found id: ""
	I1218 00:36:57.194475 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.194494 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:57.194500 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:57.194557 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:57.219267 1311248 cri.go:89] found id: ""
	I1218 00:36:57.219280 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.219287 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:57.219295 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:57.219305 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:57.274913 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:57.274932 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:57.290015 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:57.290032 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:57.353493 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:57.344799   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.345456   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347207   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347788   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.349492   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:57.344799   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.345456   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347207   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347788   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.349492   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:57.353504 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:57.353514 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:57.424372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:57.424400 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:59.955778 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:59.965801 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:59.965861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:59.993708 1311248 cri.go:89] found id: ""
	I1218 00:36:59.993722 1311248 logs.go:282] 0 containers: []
	W1218 00:36:59.993729 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:59.993734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:59.993792 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:00.055250 1311248 cri.go:89] found id: ""
	I1218 00:37:00.055266 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.055274 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:00.055280 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:00.055388 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:00.117792 1311248 cri.go:89] found id: ""
	I1218 00:37:00.117810 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.117818 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:00.117824 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:00.117903 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:00.170362 1311248 cri.go:89] found id: ""
	I1218 00:37:00.170378 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.170394 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:00.170401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:00.170482 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:00.229984 1311248 cri.go:89] found id: ""
	I1218 00:37:00.230002 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.230010 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:00.230015 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:00.230094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:00.264809 1311248 cri.go:89] found id: ""
	I1218 00:37:00.264826 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.264833 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:00.264839 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:00.264908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:00.313700 1311248 cri.go:89] found id: ""
	I1218 00:37:00.313718 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.313725 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:00.313734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:00.313747 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:00.390802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:00.390825 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:00.428189 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:00.428207 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:00.494729 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:00.494750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:00.511226 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:00.511245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:00.579855 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:00.571615   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.572526   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574023   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574461   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.575927   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:00.571615   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.572526   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574023   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574461   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.575927   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
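Each describe-nodes attempt fails identically: client-go's discovery step (memcache.go:265) cannot fetch the server API group list because the dial to [::1]:8441 is refused, meaning nothing is listening on the apiserver port at all — consistent with the empty kube-apiserver container lists above. A quick check to confirm the listener is absent (a sketch, run on the node itself; assumes curl and ss from iproute2 are present in the node image — curl exit code 7 means connection refused):

	curl -sk https://localhost:8441/version; echo "curl exit=$?"   # exit 7 = refused
	sudo ss -ltnp | grep 8441 || echo 'no listener on port 8441'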
	I1218 00:37:03.080114 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:03.090701 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:03.090768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:03.123581 1311248 cri.go:89] found id: ""
	I1218 00:37:03.123596 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.123603 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:03.123608 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:03.123666 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:03.148602 1311248 cri.go:89] found id: ""
	I1218 00:37:03.148615 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.148657 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:03.148662 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:03.148733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:03.174826 1311248 cri.go:89] found id: ""
	I1218 00:37:03.174840 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.174848 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:03.174853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:03.174927 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:03.200912 1311248 cri.go:89] found id: ""
	I1218 00:37:03.200926 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.200933 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:03.200939 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:03.200998 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:03.226151 1311248 cri.go:89] found id: ""
	I1218 00:37:03.226166 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.226173 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:03.226179 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:03.226237 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:03.253785 1311248 cri.go:89] found id: ""
	I1218 00:37:03.253799 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.253806 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:03.253812 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:03.253878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:03.279482 1311248 cri.go:89] found id: ""
	I1218 00:37:03.279495 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.279502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:03.279510 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:03.279521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:03.294545 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:03.294563 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:03.360050 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:03.351618   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.352118   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.353740   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.354377   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.356009   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:03.351618   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.352118   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.353740   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.354377   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.356009   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:03.360059 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:03.360071 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:03.423132 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:03.423151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:03.461805 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:03.461820 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.018802 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:06.030336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:06.030406 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:06.056426 1311248 cri.go:89] found id: ""
	I1218 00:37:06.056440 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.056447 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:06.056453 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:06.056513 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:06.086319 1311248 cri.go:89] found id: ""
	I1218 00:37:06.086333 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.086341 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:06.086346 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:06.086413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:06.112062 1311248 cri.go:89] found id: ""
	I1218 00:37:06.112077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.112084 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:06.112089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:06.112157 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:06.137317 1311248 cri.go:89] found id: ""
	I1218 00:37:06.137331 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.137344 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:06.137351 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:06.137419 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:06.165090 1311248 cri.go:89] found id: ""
	I1218 00:37:06.165104 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.165111 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:06.165116 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:06.165174 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:06.190738 1311248 cri.go:89] found id: ""
	I1218 00:37:06.190753 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.190759 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:06.190765 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:06.190822 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:06.215038 1311248 cri.go:89] found id: ""
	I1218 00:37:06.215066 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.215075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:06.215083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:06.215094 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.270893 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:06.270915 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:06.285817 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:06.285834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:06.354768 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:06.346564   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.347477   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349158   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349463   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.350911   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:06.354777 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:06.354787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:06.416937 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:06.416957 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
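	
	Note: "describe nodes" fails because nothing is listening on the apiserver port. A quick confirmation from inside the node, assuming the --apiserver-port=8441 used by this run (sketch):
	
	    minikube -p functional-232602 ssh -- curl -ksS https://localhost:8441/healthz
	    # while the apiserver is down this prints: curl: (7) ... Connection refused
	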
	I1218 00:37:08.951149 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:08.961238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:08.961297 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:08.985900 1311248 cri.go:89] found id: ""
	I1218 00:37:08.985916 1311248 logs.go:282] 0 containers: []
	W1218 00:37:08.985923 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:08.985928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:08.985993 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:09.016022 1311248 cri.go:89] found id: ""
	I1218 00:37:09.016036 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.016043 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:09.016048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:09.016106 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:09.040820 1311248 cri.go:89] found id: ""
	I1218 00:37:09.040841 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.040849 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:09.040853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:09.040912 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:09.065452 1311248 cri.go:89] found id: ""
	I1218 00:37:09.065466 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.065473 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:09.065478 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:09.065539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:09.095062 1311248 cri.go:89] found id: ""
	I1218 00:37:09.095077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.095083 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:09.095089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:09.095151 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:09.120274 1311248 cri.go:89] found id: ""
	I1218 00:37:09.120287 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.120294 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:09.120300 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:09.120366 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:09.144652 1311248 cri.go:89] found id: ""
	I1218 00:37:09.144667 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.144674 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:09.144683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:09.144700 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:09.159355 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:09.159371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:09.224560 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:09.215599   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.216102   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.217785   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.218180   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.219658   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:09.224571 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:09.224582 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:09.286931 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:09.286951 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:09.318873 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:09.318888 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:11.876699 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:11.887524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:11.887583 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:11.913617 1311248 cri.go:89] found id: ""
	I1218 00:37:11.913631 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.913638 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:11.913643 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:11.913701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:11.942203 1311248 cri.go:89] found id: ""
	I1218 00:37:11.942219 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.942226 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:11.942231 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:11.942292 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:11.967671 1311248 cri.go:89] found id: ""
	I1218 00:37:11.967685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.967692 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:11.967697 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:11.967766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:11.992422 1311248 cri.go:89] found id: ""
	I1218 00:37:11.992437 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.992443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:11.992448 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:11.992505 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:12.031034 1311248 cri.go:89] found id: ""
	I1218 00:37:12.031049 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.031056 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:12.031061 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:12.031119 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:12.057654 1311248 cri.go:89] found id: ""
	I1218 00:37:12.057669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.057677 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:12.057682 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:12.057764 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:12.082063 1311248 cri.go:89] found id: ""
	I1218 00:37:12.082078 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.082084 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:12.082092 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:12.082102 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:12.111103 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:12.111119 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:12.168426 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:12.168446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:12.183407 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:12.183423 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:12.251784 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:12.243223   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.243928   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.245555   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.246042   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.247684   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:12.251803 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:12.251814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
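	
	Note: each cycle opens with the pgrep probe shown above. -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest matching PID. Standalone it would be:
	
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # no output and exit status 1 means no apiserver process, so the code falls back to crictl
	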
	I1218 00:37:14.823080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:14.834459 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:14.834525 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:14.860258 1311248 cri.go:89] found id: ""
	I1218 00:37:14.860272 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.860278 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:14.860283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:14.860341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:14.884703 1311248 cri.go:89] found id: ""
	I1218 00:37:14.884722 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.884729 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:14.884734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:14.884794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:14.909031 1311248 cri.go:89] found id: ""
	I1218 00:37:14.909046 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.909054 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:14.909059 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:14.909130 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:14.934504 1311248 cri.go:89] found id: ""
	I1218 00:37:14.934518 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.934525 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:14.934531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:14.934590 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:14.965623 1311248 cri.go:89] found id: ""
	I1218 00:37:14.965638 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.965646 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:14.965651 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:14.965718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:14.991607 1311248 cri.go:89] found id: ""
	I1218 00:37:14.991623 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.991631 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:14.991636 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:14.991711 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:15.027331 1311248 cri.go:89] found id: ""
	I1218 00:37:15.027347 1311248 logs.go:282] 0 containers: []
	W1218 00:37:15.027355 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:15.027364 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:15.027376 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:15.102509 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:15.094618   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.095457   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.096397   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.097224   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.098386   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:15.102519 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:15.102530 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:15.167080 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:15.167101 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:15.200488 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:15.200504 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:15.261320 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:15.261342 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:17.777092 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:17.788005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:17.788070 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:17.813820 1311248 cri.go:89] found id: ""
	I1218 00:37:17.813834 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.813841 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:17.813846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:17.813906 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:17.841574 1311248 cri.go:89] found id: ""
	I1218 00:37:17.841588 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.841605 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:17.841610 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:17.841679 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:17.865628 1311248 cri.go:89] found id: ""
	I1218 00:37:17.865644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.865650 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:17.865656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:17.865713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:17.891259 1311248 cri.go:89] found id: ""
	I1218 00:37:17.891273 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.891289 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:17.891295 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:17.891363 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:17.918377 1311248 cri.go:89] found id: ""
	I1218 00:37:17.918391 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.918398 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:17.918403 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:17.918461 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:17.948139 1311248 cri.go:89] found id: ""
	I1218 00:37:17.948171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.948178 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:17.948183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:17.948251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:17.971855 1311248 cri.go:89] found id: ""
	I1218 00:37:17.971869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.971876 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:17.971884 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:17.971894 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:18.026594 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:18.026614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:18.042303 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:18.042328 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:18.108683 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:18.100223   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.100970   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.102555   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.103125   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.104779   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:18.108704 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:18.108729 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:18.172657 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:18.172676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
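	
	Note: the seven crictl queries enumerate one control-plane component at a time; --quiet prints container IDs only, so an empty result (found id: "") means the container was never created. To inspect a hit, hypothetically:
	
	    sudo crictl ps -a --name kube-apiserver    # -a includes exited containers
	    sudo crictl logs <container-id>            # <container-id> taken from the line above
	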
	I1218 00:37:20.704818 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:20.715060 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:20.715120 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:20.741147 1311248 cri.go:89] found id: ""
	I1218 00:37:20.741161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.741168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:20.741174 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:20.741231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:20.765846 1311248 cri.go:89] found id: ""
	I1218 00:37:20.765860 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.765867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:20.765872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:20.765930 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:20.795338 1311248 cri.go:89] found id: ""
	I1218 00:37:20.795351 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.795358 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:20.795364 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:20.795421 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:20.823054 1311248 cri.go:89] found id: ""
	I1218 00:37:20.823068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.823075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:20.823080 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:20.823137 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:20.848186 1311248 cri.go:89] found id: ""
	I1218 00:37:20.848200 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.848208 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:20.848213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:20.848278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:20.872642 1311248 cri.go:89] found id: ""
	I1218 00:37:20.872656 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.872662 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:20.872668 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:20.872771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:20.897151 1311248 cri.go:89] found id: ""
	I1218 00:37:20.897165 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.897172 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:20.897180 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:20.897190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:20.951948 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:20.951968 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:20.966927 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:20.966943 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:21.033275 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:21.024825   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.025249   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.026815   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.028251   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.029400   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:21.033286 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:21.033296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:21.096425 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:21.096445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
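	
	Note: the container-status command above is a fallback chain: `which crictl || echo crictl` substitutes a bare "crictl" when the binary is not on root's PATH, and the trailing "|| sudo docker ps -a" covers docker-runtime nodes. Written out as a standalone line (sketch):
	
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	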
	I1218 00:37:23.624716 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:23.635084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:23.635160 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:23.668648 1311248 cri.go:89] found id: ""
	I1218 00:37:23.668662 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.668670 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:23.668675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:23.668755 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:23.700454 1311248 cri.go:89] found id: ""
	I1218 00:37:23.700468 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.700475 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:23.700480 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:23.700538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:23.732021 1311248 cri.go:89] found id: ""
	I1218 00:37:23.732035 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.732043 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:23.732048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:23.732124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:23.760854 1311248 cri.go:89] found id: ""
	I1218 00:37:23.760868 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.760875 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:23.760881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:23.760942 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:23.786164 1311248 cri.go:89] found id: ""
	I1218 00:37:23.786178 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.786185 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:23.786189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:23.786248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:23.811196 1311248 cri.go:89] found id: ""
	I1218 00:37:23.811220 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.811229 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:23.811234 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:23.811300 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:23.835282 1311248 cri.go:89] found id: ""
	I1218 00:37:23.835297 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.835314 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:23.835323 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:23.835334 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:23.899950 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:23.891162   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.891981   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893473   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893917   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.895404   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:23.899970 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:23.899981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:23.966454 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:23.966474 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:23.994564 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:23.994580 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:24.052734 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:24.052755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
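	
	Note: the dmesg sweep keeps only kernel messages at warn priority and above: -P disables the pager, -H selects human-readable output, -L=never strips color escapes, and --level lists the priorities to include. Standalone:
	
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	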
	I1218 00:37:26.568298 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:26.578561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:26.578622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:26.602733 1311248 cri.go:89] found id: ""
	I1218 00:37:26.602747 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.602755 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:26.602761 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:26.602826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:26.631092 1311248 cri.go:89] found id: ""
	I1218 00:37:26.631106 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.631113 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:26.631118 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:26.631180 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:26.677513 1311248 cri.go:89] found id: ""
	I1218 00:37:26.677528 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.677536 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:26.677541 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:26.677608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:26.712071 1311248 cri.go:89] found id: ""
	I1218 00:37:26.712085 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.712093 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:26.712100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:26.712167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:26.738769 1311248 cri.go:89] found id: ""
	I1218 00:37:26.738783 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.738790 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:26.738795 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:26.738857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:26.764344 1311248 cri.go:89] found id: ""
	I1218 00:37:26.764358 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.764365 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:26.764370 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:26.764428 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:26.790276 1311248 cri.go:89] found id: ""
	I1218 00:37:26.790290 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.790297 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:26.790305 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:26.790315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:26.845607 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:26.845626 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:26.861063 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:26.861080 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:26.931574 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:26.923282   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.923912   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925418   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925858   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.927306   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:26.931584 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:26.931595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:26.998426 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:26.998445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:29.540997 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:29.551044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:29.551103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:29.575146 1311248 cri.go:89] found id: ""
	I1218 00:37:29.575161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.575168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:29.575173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:29.575230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:29.599039 1311248 cri.go:89] found id: ""
	I1218 00:37:29.599052 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.599059 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:29.599064 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:29.599123 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:29.623971 1311248 cri.go:89] found id: ""
	I1218 00:37:29.623985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.623993 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:29.623998 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:29.624057 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:29.653653 1311248 cri.go:89] found id: ""
	I1218 00:37:29.653669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.653675 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:29.653681 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:29.653754 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:29.687572 1311248 cri.go:89] found id: ""
	I1218 00:37:29.687586 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.687593 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:29.687599 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:29.687670 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:29.725789 1311248 cri.go:89] found id: ""
	I1218 00:37:29.725803 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.725811 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:29.725816 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:29.725878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:29.753212 1311248 cri.go:89] found id: ""
	I1218 00:37:29.753226 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.753233 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:29.753241 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:29.753253 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:29.810976 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:29.810996 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:29.825952 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:29.825969 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:29.893717 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:29.885172   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.885837   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.887444   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.888114   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.889881   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:29.885172   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.885837   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.887444   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.888114   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.889881   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:29.893736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:29.893748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:29.959773 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:29.959794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
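
The cycle above is minikube's apiserver wait loop: look for a kube-apiserver process, query crictl for each expected control-plane container, and, when every query comes back empty, fall back to collecting logs. A minimal bash sketch of the same per-component check (the component list and the crictl invocation are copied from the log; running it inside the node, e.g. via `minikube ssh`, is an assumption for local triage, not part of the test run):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        # Same query minikube issues above; empty output means the container was never created.
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container found matching \"$c\""
    done
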
	I1218 00:37:32.492460 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:32.502745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:32.502807 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:32.528416 1311248 cri.go:89] found id: ""
	I1218 00:37:32.528431 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.528438 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:32.528443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:32.528501 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:32.553770 1311248 cri.go:89] found id: ""
	I1218 00:37:32.553785 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.553792 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:32.553798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:32.553861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:32.577941 1311248 cri.go:89] found id: ""
	I1218 00:37:32.577956 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.577963 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:32.577969 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:32.578028 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:32.604043 1311248 cri.go:89] found id: ""
	I1218 00:37:32.604058 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.604075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:32.604081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:32.604159 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:32.629080 1311248 cri.go:89] found id: ""
	I1218 00:37:32.629095 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.629102 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:32.629108 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:32.629167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:32.664156 1311248 cri.go:89] found id: ""
	I1218 00:37:32.664171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.664187 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:32.664193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:32.664281 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:32.692107 1311248 cri.go:89] found id: ""
	I1218 00:37:32.692141 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.692149 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:32.692158 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:32.692168 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:32.758211 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:32.758238 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:32.774028 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:32.774047 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:32.839724 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:32.829408   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.831067   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.832031   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.833732   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.834447   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:32.829408   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.831067   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.832031   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.833732   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.834447   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:32.839734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:32.839749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:32.905609 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:32.905633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:35.434204 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:35.445035 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:35.445099 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:35.470531 1311248 cri.go:89] found id: ""
	I1218 00:37:35.470545 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.470553 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:35.470558 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:35.470621 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:35.494976 1311248 cri.go:89] found id: ""
	I1218 00:37:35.494990 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.494996 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:35.495001 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:35.495063 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:35.519629 1311248 cri.go:89] found id: ""
	I1218 00:37:35.519644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.519651 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:35.519656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:35.519714 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:35.544438 1311248 cri.go:89] found id: ""
	I1218 00:37:35.544453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.544460 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:35.544465 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:35.544523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:35.569684 1311248 cri.go:89] found id: ""
	I1218 00:37:35.569699 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.569706 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:35.569712 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:35.569771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:35.595541 1311248 cri.go:89] found id: ""
	I1218 00:37:35.595556 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.595563 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:35.595568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:35.595632 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:35.620307 1311248 cri.go:89] found id: ""
	I1218 00:37:35.620321 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.620328 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:35.620336 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:35.620346 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:35.678927 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:35.678945 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:35.697469 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:35.697488 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:35.774692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:35.766317   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.766971   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768479   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768968   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.770457   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:35.766317   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.766971   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768479   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768968   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.770457   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:35.774703 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:35.774713 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:35.836772 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:35.836792 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:38.369786 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:38.380243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:38.380304 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:38.406412 1311248 cri.go:89] found id: ""
	I1218 00:37:38.406426 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.406433 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:38.406439 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:38.406497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:38.431433 1311248 cri.go:89] found id: ""
	I1218 00:37:38.431447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.431454 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:38.431460 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:38.431518 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:38.455854 1311248 cri.go:89] found id: ""
	I1218 00:37:38.455869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.455876 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:38.455881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:38.455943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:38.480414 1311248 cri.go:89] found id: ""
	I1218 00:37:38.480428 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.480435 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:38.480440 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:38.480497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:38.506521 1311248 cri.go:89] found id: ""
	I1218 00:37:38.506535 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.506551 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:38.506557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:38.506630 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:38.531738 1311248 cri.go:89] found id: ""
	I1218 00:37:38.531762 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.531769 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:38.531774 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:38.531840 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:38.557054 1311248 cri.go:89] found id: ""
	I1218 00:37:38.557068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.557075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:38.557083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:38.557092 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:38.613102 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:38.613120 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:38.627653 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:38.627670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:38.723568 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:38.715053   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.715674   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717355   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717806   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.719420   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:38.715053   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.715674   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717355   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717806   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.719420   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:38.723579 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:38.723591 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:38.784988 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:38.785008 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
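
Every `describe nodes` attempt dies the same way: connection refused on [::1]:8441, meaning nothing is bound to the --apiserver-port this profile was started with, so the failure happens before any TLS or auth exchange. A hypothetical probe to confirm the port state from inside the node (ss and curl are assumed to be available in the node image; neither command appears in the test run itself):

    # Anything listening on the configured apiserver port?
    sudo ss -ltnp | grep ':8441' || echo "port 8441: no listener"
    # Once the apiserver is up, its health endpoint answers here:
    curl -k https://localhost:8441/livez
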
	I1218 00:37:41.315880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:41.326378 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:41.326457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:41.351366 1311248 cri.go:89] found id: ""
	I1218 00:37:41.351381 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.351390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:41.351395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:41.351454 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:41.376110 1311248 cri.go:89] found id: ""
	I1218 00:37:41.376124 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.376131 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:41.376137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:41.376192 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:41.401062 1311248 cri.go:89] found id: ""
	I1218 00:37:41.401075 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.401082 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:41.401087 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:41.401146 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:41.425454 1311248 cri.go:89] found id: ""
	I1218 00:37:41.425469 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.425475 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:41.425481 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:41.425539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:41.454711 1311248 cri.go:89] found id: ""
	I1218 00:37:41.454724 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.454732 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:41.454737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:41.454799 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:41.479667 1311248 cri.go:89] found id: ""
	I1218 00:37:41.479681 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.479688 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:41.479694 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:41.479752 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:41.504248 1311248 cri.go:89] found id: ""
	I1218 00:37:41.504261 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.504268 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:41.504276 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:41.504323 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:41.559589 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:41.559609 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:41.574018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:41.574034 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:41.637175 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:41.628415   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.628972   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.630734   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.631095   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.632663   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:41.628415   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.628972   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.630734   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.631095   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.632663   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:41.637186 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:41.637196 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:41.712099 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:41.712122 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.243063 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:44.253213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:44.253272 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:44.278124 1311248 cri.go:89] found id: ""
	I1218 00:37:44.278138 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.278145 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:44.278150 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:44.278211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:44.302729 1311248 cri.go:89] found id: ""
	I1218 00:37:44.302743 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.302750 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:44.302755 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:44.302813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:44.327369 1311248 cri.go:89] found id: ""
	I1218 00:37:44.327384 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.327391 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:44.327396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:44.327458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:44.351769 1311248 cri.go:89] found id: ""
	I1218 00:37:44.351784 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.351791 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:44.351796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:44.351858 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:44.378488 1311248 cri.go:89] found id: ""
	I1218 00:37:44.378502 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.378509 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:44.378514 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:44.378574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:44.404134 1311248 cri.go:89] found id: ""
	I1218 00:37:44.404149 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.404156 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:44.404161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:44.404219 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:44.428529 1311248 cri.go:89] found id: ""
	I1218 00:37:44.428543 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.428551 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:44.428559 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:44.428570 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:44.443196 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:44.443212 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:44.505692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:44.497164   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.497880   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.499622   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.500221   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.501801   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:44.497164   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.497880   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.499622   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.500221   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.501801   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:44.505702 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:44.505712 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:44.571665 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:44.571686 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.600535 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:44.600553 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:47.157844 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:47.168414 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:47.168474 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:47.197971 1311248 cri.go:89] found id: ""
	I1218 00:37:47.197985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.197992 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:47.197997 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:47.198054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:47.223237 1311248 cri.go:89] found id: ""
	I1218 00:37:47.223251 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.223258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:47.223263 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:47.223322 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:47.251998 1311248 cri.go:89] found id: ""
	I1218 00:37:47.252018 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.252025 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:47.252031 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:47.252089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:47.275741 1311248 cri.go:89] found id: ""
	I1218 00:37:47.275755 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.275764 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:47.275769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:47.275826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:47.302583 1311248 cri.go:89] found id: ""
	I1218 00:37:47.302597 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.302604 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:47.302609 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:47.302665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:47.327501 1311248 cri.go:89] found id: ""
	I1218 00:37:47.327516 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.327523 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:47.327528 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:47.327594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:47.352433 1311248 cri.go:89] found id: ""
	I1218 00:37:47.352447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.352454 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:47.352463 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:47.352473 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:47.410340 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:47.410362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:47.425365 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:47.425388 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:47.492532 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:47.484205   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.484730   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486231   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486712   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.488178   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:47.484205   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.484730   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486231   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486712   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.488178   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:47.492542 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:47.492562 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:47.553805 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:47.553828 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
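
The timestamps show the probe firing roughly every three seconds (00:37:26, :29, :32, ...). A sketch of an equivalent bounded wait, in the same spirit as the retries logged here (the 3 s interval matches the log spacing; the 5-minute budget is illustrative, not taken from minikube's source):

    deadline=$((SECONDS + 300))   # illustrative deadline, not minikube's actual timeout
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
        sleep 3   # matches the ~3 s spacing of the probes above
    done
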
	I1218 00:37:50.086246 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:50.097136 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:50.097206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:50.124671 1311248 cri.go:89] found id: ""
	I1218 00:37:50.124685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.124693 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:50.124698 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:50.124766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:50.150439 1311248 cri.go:89] found id: ""
	I1218 00:37:50.150453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.150460 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:50.150464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:50.150523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:50.174899 1311248 cri.go:89] found id: ""
	I1218 00:37:50.174913 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.174921 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:50.174926 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:50.174992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:50.200398 1311248 cri.go:89] found id: ""
	I1218 00:37:50.200412 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.200420 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:50.200425 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:50.200486 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:50.226325 1311248 cri.go:89] found id: ""
	I1218 00:37:50.226338 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.226345 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:50.226350 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:50.226409 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:50.251194 1311248 cri.go:89] found id: ""
	I1218 00:37:50.251208 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.251215 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:50.251220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:50.251287 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:50.278029 1311248 cri.go:89] found id: ""
	I1218 00:37:50.278043 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.278050 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:50.278057 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:50.278067 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:50.338421 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:50.338443 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:50.368542 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:50.368565 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:50.423715 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:50.423734 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:50.438292 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:50.438308 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:50.499550 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:50.491066   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.491885   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493525   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493818   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.495259   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:50.491066   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.491885   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493525   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493818   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.495259   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:52.999811 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:53.011389 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:53.011453 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:53.036842 1311248 cri.go:89] found id: ""
	I1218 00:37:53.036861 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.036869 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:53.036884 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:53.036981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:53.069368 1311248 cri.go:89] found id: ""
	I1218 00:37:53.069383 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.069391 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:53.069397 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:53.069458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:53.093990 1311248 cri.go:89] found id: ""
	I1218 00:37:53.094004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.094011 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:53.094016 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:53.094076 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:53.119386 1311248 cri.go:89] found id: ""
	I1218 00:37:53.119400 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.119417 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:53.119423 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:53.119487 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:53.144979 1311248 cri.go:89] found id: ""
	I1218 00:37:53.144992 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.144999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:53.145005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:53.145062 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:53.171485 1311248 cri.go:89] found id: ""
	I1218 00:37:53.171499 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.171506 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:53.171512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:53.171570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:53.198517 1311248 cri.go:89] found id: ""
	I1218 00:37:53.198530 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.198537 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:53.198545 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:53.198556 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:53.225701 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:53.225719 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:53.280281 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:53.280300 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:53.295217 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:53.295235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:53.360920 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:53.352238   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.352952   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.354786   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.355181   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.356802   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:53.360930 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:53.360940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
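	The cycle above is minikube's apiserver wait loop: it first looks for a kube-apiserver process, then asks the CRI runtime for each control-plane container by name, and every query comes back empty. A minimal sketch of the same per-component check, runnable by hand inside the node (the component list and the crictl invocation are taken verbatim from the log; the loop and output formatting are illustrative assumptions):
	
	    #!/bin/bash
	    # Sketch: repeat the per-component CRI query from the cycle above.
	    # Assumes crictl is on PATH inside the node, as it is in the log.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        # matches the W-level "No container was found matching ..." lines
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done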
	I1218 00:37:55.923673 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:55.935823 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:55.935880 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:55.963196 1311248 cri.go:89] found id: ""
	I1218 00:37:55.963210 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.963217 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:55.963222 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:55.963278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:55.992688 1311248 cri.go:89] found id: ""
	I1218 00:37:55.992701 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.992708 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:55.992713 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:55.992778 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:56.032683 1311248 cri.go:89] found id: ""
	I1218 00:37:56.032696 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.032705 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:56.032711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:56.032779 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:56.061554 1311248 cri.go:89] found id: ""
	I1218 00:37:56.061568 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.061575 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:56.061580 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:56.061639 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:56.090855 1311248 cri.go:89] found id: ""
	I1218 00:37:56.090869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.090877 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:56.090882 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:56.090943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:56.115990 1311248 cri.go:89] found id: ""
	I1218 00:37:56.116004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.116020 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:56.116026 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:56.116085 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:56.141361 1311248 cri.go:89] found id: ""
	I1218 00:37:56.141385 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.141393 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:56.141401 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:56.141412 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:56.202998 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:56.194857   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.195410   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197059   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197614   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.199168   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:56.203008 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:56.203019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:56.263974 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:56.263994 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:56.295494 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:56.295509 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:56.350431 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:56.350450 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
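	Every describe-nodes attempt fails the same way: nothing is listening on apiserver port 8441, so the TCP connect is refused before TLS or authentication ever happens. Probing the endpoint directly confirms this faster than retrying kubectl (the port comes from the log; the curl flags and the ss fallback are assumptions):
	
	    # Sketch: probe the endpoint the failing kubectl calls above use.
	    # A refused connection here reproduces "connect: connection refused".
	    curl -ksS https://localhost:8441/healthz || true
	    # Check whether anything is bound to the port at all:
	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"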
	I1218 00:37:58.867454 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:58.877799 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:58.877861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:58.929615 1311248 cri.go:89] found id: ""
	I1218 00:37:58.929629 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.929636 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:58.929642 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:58.929701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:58.958880 1311248 cri.go:89] found id: ""
	I1218 00:37:58.958894 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.958900 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:58.958906 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:58.958965 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:58.983460 1311248 cri.go:89] found id: ""
	I1218 00:37:58.983475 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.983482 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:58.983487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:58.983547 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:59.009476 1311248 cri.go:89] found id: ""
	I1218 00:37:59.009490 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.009497 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:59.009503 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:59.009563 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:59.033436 1311248 cri.go:89] found id: ""
	I1218 00:37:59.033450 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.033457 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:59.033462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:59.033522 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:59.058635 1311248 cri.go:89] found id: ""
	I1218 00:37:59.058649 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.058656 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:59.058661 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:59.058719 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:59.082644 1311248 cri.go:89] found id: ""
	I1218 00:37:59.082658 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.082666 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:59.082673 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:59.082684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:59.138067 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:59.138085 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:59.154868 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:59.154884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:59.232032 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:59.223238   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.223623   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225341   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225946   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.227686   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:59.232043 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:59.232061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:59.297264 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:59.297288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:01.827672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:01.838270 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:01.838330 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:01.862836 1311248 cri.go:89] found id: ""
	I1218 00:38:01.862855 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.862862 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:01.862867 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:01.862925 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:01.892782 1311248 cri.go:89] found id: ""
	I1218 00:38:01.892797 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.892804 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:01.892810 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:01.892876 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:01.919043 1311248 cri.go:89] found id: ""
	I1218 00:38:01.919068 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.919076 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:01.919081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:01.919148 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:01.945252 1311248 cri.go:89] found id: ""
	I1218 00:38:01.945267 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.945285 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:01.945291 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:01.945368 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:01.974338 1311248 cri.go:89] found id: ""
	I1218 00:38:01.974353 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.974361 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:01.974366 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:01.974433 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:02.003307 1311248 cri.go:89] found id: ""
	I1218 00:38:02.003324 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.003332 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:02.003339 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:02.003423 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:02.030938 1311248 cri.go:89] found id: ""
	I1218 00:38:02.030953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.030960 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:02.030968 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:02.030979 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:02.100511 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:02.091512   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.092078   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.093817   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.094344   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.095821   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:02.100521 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:02.100531 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:02.162112 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:02.162132 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:02.191957 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:02.191976 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:02.248095 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:02.248116 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:04.765008 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:04.775100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:04.775168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:04.799097 1311248 cri.go:89] found id: ""
	I1218 00:38:04.799125 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.799132 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:04.799137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:04.799206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:04.826968 1311248 cri.go:89] found id: ""
	I1218 00:38:04.826993 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.827000 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:04.827005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:04.827083 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:04.860005 1311248 cri.go:89] found id: ""
	I1218 00:38:04.860020 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.860027 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:04.860032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:04.860103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:04.886293 1311248 cri.go:89] found id: ""
	I1218 00:38:04.886307 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.886315 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:04.886320 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:04.886385 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:04.918579 1311248 cri.go:89] found id: ""
	I1218 00:38:04.918594 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.918601 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:04.918607 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:04.918676 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:04.945152 1311248 cri.go:89] found id: ""
	I1218 00:38:04.945167 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.945183 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:04.945189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:04.945258 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:04.976410 1311248 cri.go:89] found id: ""
	I1218 00:38:04.976424 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.976432 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:04.976439 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:04.976449 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:05.032080 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:05.032100 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:05.047379 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:05.047396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:05.113965 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:05.105127   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.105769   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.107565   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.108085   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.109817   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:05.113975 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:05.113986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:05.174878 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:05.174897 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:07.706926 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:07.717077 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:07.717140 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:07.741430 1311248 cri.go:89] found id: ""
	I1218 00:38:07.741464 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.741471 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:07.741477 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:07.741538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:07.766770 1311248 cri.go:89] found id: ""
	I1218 00:38:07.766784 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.766791 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:07.766796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:07.766855 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:07.790902 1311248 cri.go:89] found id: ""
	I1218 00:38:07.790917 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.790924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:07.790929 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:07.791005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:07.819681 1311248 cri.go:89] found id: ""
	I1218 00:38:07.819696 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.819703 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:07.819708 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:07.819770 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:07.844498 1311248 cri.go:89] found id: ""
	I1218 00:38:07.844512 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.844519 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:07.844524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:07.844584 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:07.870028 1311248 cri.go:89] found id: ""
	I1218 00:38:07.870043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.870050 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:07.870057 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:07.870125 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:07.906969 1311248 cri.go:89] found id: ""
	I1218 00:38:07.906984 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.906999 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:07.907007 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:07.907017 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:07.974278 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:07.974306 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:07.989533 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:07.989551 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:08.055867 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:08.047282   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.048178   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.049950   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.050299   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.051821   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:08.055877 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:08.055889 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:08.118669 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:08.118693 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
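	The "container status" gatherer is a shell fallback chain: `which crictl || echo crictl` resolves crictl to its full path, or to the bare name if it is not on root's PATH, and if that invocation fails outright the outer || falls through to docker. Annotated form of the same one-liner (commands copied from the log line; only the comments are added):
	
	    # 1. the inner `|| echo crictl` keeps the command substitution non-empty
	    # 2. the outer `||` falls back to docker when crictl itself fails
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a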
	I1218 00:38:10.651292 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:10.663394 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:10.663471 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:10.687520 1311248 cri.go:89] found id: ""
	I1218 00:38:10.687534 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.687542 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:10.687547 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:10.687608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:10.713147 1311248 cri.go:89] found id: ""
	I1218 00:38:10.713161 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.713168 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:10.713173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:10.713231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:10.737926 1311248 cri.go:89] found id: ""
	I1218 00:38:10.737940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.737948 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:10.737953 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:10.738012 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:10.763422 1311248 cri.go:89] found id: ""
	I1218 00:38:10.763436 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.763443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:10.763449 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:10.763508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:10.788619 1311248 cri.go:89] found id: ""
	I1218 00:38:10.788659 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.788672 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:10.788677 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:10.788738 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:10.813718 1311248 cri.go:89] found id: ""
	I1218 00:38:10.813732 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.813740 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:10.813745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:10.813803 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:10.837575 1311248 cri.go:89] found id: ""
	I1218 00:38:10.837588 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.837595 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:10.837603 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:10.837614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:10.852133 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:10.852149 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:10.917780 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:10.909596   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.910424   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912063   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912382   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.913799   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:10.917791 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:10.917801 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:10.987674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:10.987695 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:11.024530 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:11.024549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.581947 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:13.592491 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:13.592556 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:13.617579 1311248 cri.go:89] found id: ""
	I1218 00:38:13.617593 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.617600 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:13.617605 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:13.617665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:13.641975 1311248 cri.go:89] found id: ""
	I1218 00:38:13.641990 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.641997 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:13.642002 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:13.642060 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:13.667128 1311248 cri.go:89] found id: ""
	I1218 00:38:13.667142 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.667149 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:13.667154 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:13.667215 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:13.699564 1311248 cri.go:89] found id: ""
	I1218 00:38:13.699579 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.699586 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:13.699591 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:13.699655 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:13.727620 1311248 cri.go:89] found id: ""
	I1218 00:38:13.727634 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.727641 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:13.727646 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:13.727703 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:13.756118 1311248 cri.go:89] found id: ""
	I1218 00:38:13.756132 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.756138 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:13.756144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:13.756204 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:13.780706 1311248 cri.go:89] found id: ""
	I1218 00:38:13.780720 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.780728 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:13.780736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:13.780746 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:13.842845 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:13.842864 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:13.871826 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:13.871843 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.932300 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:13.932319 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:13.950089 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:13.950106 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:14.022114 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:14.013202   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.013842   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.015677   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.016243   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.018014   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
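	Each retry cycle opens with pgrep -xnf, i.e. an exact regex match (-x) against the full command line (-f) of the newest (-n) matching process; an empty result is what sends minikube back into the CRI checks. A hand-rolled version of the same wait with a bounded timeout (the pgrep invocation is the one from the log; the loop, interval, and timeout are assumptions):
	
	    # Sketch: wait up to ~60s for a kube-apiserver process to appear.
	    for i in $(seq 1 30); do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver is running"; break
	      fi
	      sleep 2
	    done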
	I1218 00:38:16.522391 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:16.534271 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:16.534357 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:16.558729 1311248 cri.go:89] found id: ""
	I1218 00:38:16.558743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.558757 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:16.558762 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:16.558819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:16.587758 1311248 cri.go:89] found id: ""
	I1218 00:38:16.587772 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.587779 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:16.587784 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:16.587841 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:16.612793 1311248 cri.go:89] found id: ""
	I1218 00:38:16.612807 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.612814 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:16.612819 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:16.612907 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:16.637417 1311248 cri.go:89] found id: ""
	I1218 00:38:16.637431 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.637438 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:16.637443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:16.637508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:16.662059 1311248 cri.go:89] found id: ""
	I1218 00:38:16.662073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.662080 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:16.662085 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:16.662141 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:16.686710 1311248 cri.go:89] found id: ""
	I1218 00:38:16.686724 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.686731 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:16.686737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:16.686794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:16.711539 1311248 cri.go:89] found id: ""
	I1218 00:38:16.711553 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.711561 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:16.711569 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:16.711579 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:16.739136 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:16.739151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:16.794672 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:16.794694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:16.809147 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:16.809171 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:16.878702 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:16.870875   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.871602   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873068   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873374   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.874856   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:16.870875   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.871602   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873068   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873374   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.874856   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:16.878711 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:16.878723 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
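The block above is one iteration of minikube's health-check loop: it polls containerd for each expected control-plane container and finds none. A minimal sketch of the same check, run by hand on the node (assuming, as the log's own commands do, that crictl and sudo are available):

    # Poll containerd for the expected control-plane containers; an empty
    # result corresponds to the `found id: ""` lines in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name: $ids"
    done

Empty output for every name means the kubelet never created the static pods, which is why the apiserver probes that follow are refused.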
	I1218 00:38:19.444575 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:19.454827 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:19.454887 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:19.482057 1311248 cri.go:89] found id: ""
	I1218 00:38:19.482071 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.482078 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:19.482083 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:19.482142 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:19.505124 1311248 cri.go:89] found id: ""
	I1218 00:38:19.505138 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.505146 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:19.505151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:19.505209 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:19.530010 1311248 cri.go:89] found id: ""
	I1218 00:38:19.530024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.530031 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:19.530037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:19.530094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:19.555994 1311248 cri.go:89] found id: ""
	I1218 00:38:19.556008 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.556025 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:19.556030 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:19.556087 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:19.580515 1311248 cri.go:89] found id: ""
	I1218 00:38:19.580539 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.580546 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:19.580554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:19.580619 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:19.605333 1311248 cri.go:89] found id: ""
	I1218 00:38:19.605348 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.605354 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:19.605360 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:19.605418 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:19.630483 1311248 cri.go:89] found id: ""
	I1218 00:38:19.630497 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.630504 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:19.630512 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:19.630522 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:19.693128 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:19.684824   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.685371   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.686899   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.687332   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.688942   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:19.684824   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.685371   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.686899   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.687332   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.688942   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:19.693138 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:19.693148 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.755570 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:19.755590 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:19.785139 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:19.785156 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:19.842579 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:19.842605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:22.358338 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:22.368724 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:22.368793 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:22.392394 1311248 cri.go:89] found id: ""
	I1218 00:38:22.392408 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.392415 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:22.392420 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:22.392478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:22.419029 1311248 cri.go:89] found id: ""
	I1218 00:38:22.419043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.419050 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:22.419055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:22.419117 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:22.443838 1311248 cri.go:89] found id: ""
	I1218 00:38:22.443852 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.443859 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:22.443864 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:22.443923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:22.467780 1311248 cri.go:89] found id: ""
	I1218 00:38:22.467794 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.467801 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:22.467807 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:22.467864 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:22.497254 1311248 cri.go:89] found id: ""
	I1218 00:38:22.497268 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.497276 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:22.497281 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:22.497340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:22.521672 1311248 cri.go:89] found id: ""
	I1218 00:38:22.521686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.521693 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:22.521699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:22.521758 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:22.548085 1311248 cri.go:89] found id: ""
	I1218 00:38:22.548119 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.548126 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:22.548134 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:22.548144 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:22.614828 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:22.614852 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:22.643447 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:22.643462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:22.698947 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:22.698967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:22.713971 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:22.713986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:22.789955 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:22.774480   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.775084   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.776811   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.783961   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.784735   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:22.774480   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.775084   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.776811   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.783961   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.784735   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
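Every describe-nodes attempt fails the same way: kubectl cannot reach the API server on localhost:8441, the port set by --apiserver-port=8441, because the crictl checks show no kube-apiserver container ever started. Two quick probes that separate "apiserver never came up" from "wrong host or port" (a sketch; the ss check is an assumption added here, it does not appear in the log itself):

    # Nothing should be listening if the apiserver never started.
    sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"

    # Reproduce the refused connection directly with the bundled kubectl.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz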
	I1218 00:38:25.290158 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:25.300164 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:25.300226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:25.323897 1311248 cri.go:89] found id: ""
	I1218 00:38:25.323912 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.323919 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:25.323924 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:25.323985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:25.352232 1311248 cri.go:89] found id: ""
	I1218 00:38:25.352245 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.352252 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:25.352257 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:25.352314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:25.376749 1311248 cri.go:89] found id: ""
	I1218 00:38:25.376785 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.376792 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:25.376797 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:25.376868 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:25.401002 1311248 cri.go:89] found id: ""
	I1218 00:38:25.401015 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.401023 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:25.401028 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:25.401089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:25.426497 1311248 cri.go:89] found id: ""
	I1218 00:38:25.426510 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.426517 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:25.426522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:25.426579 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:25.450505 1311248 cri.go:89] found id: ""
	I1218 00:38:25.450518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.450525 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:25.450536 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:25.450593 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:25.478999 1311248 cri.go:89] found id: ""
	I1218 00:38:25.479013 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.479029 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:25.479037 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:25.479048 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:25.540968 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:25.532836   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.533607   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535202   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535518   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.537180   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:25.532836   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.533607   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535202   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535518   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.537180   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:25.540977 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:25.540987 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:25.601527 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:25.601546 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:25.633804 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:25.633826 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:25.691056 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:25.691076 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.206639 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:28.217134 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:28.217198 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:28.242357 1311248 cri.go:89] found id: ""
	I1218 00:38:28.242372 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.242378 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:28.242384 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:28.242449 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:28.271155 1311248 cri.go:89] found id: ""
	I1218 00:38:28.271169 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.271176 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:28.271181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:28.271242 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:28.296330 1311248 cri.go:89] found id: ""
	I1218 00:38:28.296345 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.296352 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:28.296357 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:28.296413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:28.320425 1311248 cri.go:89] found id: ""
	I1218 00:38:28.320449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.320456 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:28.320461 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:28.320528 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:28.345590 1311248 cri.go:89] found id: ""
	I1218 00:38:28.345603 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.345610 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:28.345625 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:28.345688 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:28.374296 1311248 cri.go:89] found id: ""
	I1218 00:38:28.374310 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.374334 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:28.374340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:28.374407 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:28.397991 1311248 cri.go:89] found id: ""
	I1218 00:38:28.398006 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.398014 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:28.398023 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:28.398033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:28.453794 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:28.453812 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.468531 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:28.468547 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:28.536754 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:28.527630   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529120   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529993   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531712   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531990   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:28.527630   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529120   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529993   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531712   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531990   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:28.536784 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:28.536796 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:28.599155 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:28.599174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
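With every container list empty, the "Gathering logs" steps fall back to the systemd journals; the kubelet unit is normally where the reason the static pods were never created shows up (image pulls, certificates, cgroup errors). The same reads by hand, with --no-pager added for non-interactive use:

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager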
	I1218 00:38:31.143176 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:31.156254 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:31.156313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:31.185437 1311248 cri.go:89] found id: ""
	I1218 00:38:31.185452 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.185460 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:31.185472 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:31.185531 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:31.215130 1311248 cri.go:89] found id: ""
	I1218 00:38:31.215144 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.215153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:31.215157 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:31.215217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:31.240144 1311248 cri.go:89] found id: ""
	I1218 00:38:31.240157 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.240164 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:31.240169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:31.240227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:31.265058 1311248 cri.go:89] found id: ""
	I1218 00:38:31.265072 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.265079 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:31.265084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:31.265150 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:31.289354 1311248 cri.go:89] found id: ""
	I1218 00:38:31.289368 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.289375 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:31.289380 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:31.289438 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:31.319744 1311248 cri.go:89] found id: ""
	I1218 00:38:31.319758 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.319766 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:31.319771 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:31.319826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:31.343739 1311248 cri.go:89] found id: ""
	I1218 00:38:31.343753 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.343760 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:31.343768 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:31.343778 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:31.399267 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:31.399287 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:31.413578 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:31.413595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:31.478705 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:31.470217   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.470850   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472351   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472980   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.474460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:31.470217   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.470850   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472351   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472980   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.474460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:31.478714 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:31.478724 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:31.540680 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:31.540703 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.068816 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:34.079525 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:34.079589 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:34.106415 1311248 cri.go:89] found id: ""
	I1218 00:38:34.106432 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.106440 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:34.106445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:34.106506 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:34.131181 1311248 cri.go:89] found id: ""
	I1218 00:38:34.131195 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.131202 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:34.131208 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:34.131265 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:34.166885 1311248 cri.go:89] found id: ""
	I1218 00:38:34.166898 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.166906 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:34.166911 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:34.166970 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:34.197771 1311248 cri.go:89] found id: ""
	I1218 00:38:34.197786 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.197793 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:34.197798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:34.197856 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:34.226531 1311248 cri.go:89] found id: ""
	I1218 00:38:34.226546 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.226552 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:34.226557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:34.226614 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:34.252100 1311248 cri.go:89] found id: ""
	I1218 00:38:34.252114 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.252121 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:34.252127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:34.252185 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:34.278653 1311248 cri.go:89] found id: ""
	I1218 00:38:34.278667 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.278675 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:34.278683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:34.278694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:34.293444 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:34.293463 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:34.359201 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:34.359211 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:34.359221 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:34.420750 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:34.420773 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.449621 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:34.449637 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
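Each retry cycle opens with a process-level check before any crictl query: pgrep -xnf requires an exact match (-x) against the full command line (-f) and prints only the newest matching PID (-n), so it succeeds only once a real kube-apiserver process for this profile exists. By hand:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process yet"

Its repeated silent failure is what keeps this diagnostic loop cycling until the start timeout expires.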
	I1218 00:38:37.006206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:37.019401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:37.019472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:37.047646 1311248 cri.go:89] found id: ""
	I1218 00:38:37.047660 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.047667 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:37.047673 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:37.047733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:37.076612 1311248 cri.go:89] found id: ""
	I1218 00:38:37.076646 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.076653 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:37.076658 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:37.076717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:37.102368 1311248 cri.go:89] found id: ""
	I1218 00:38:37.102383 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.102390 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:37.102395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:37.102452 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:37.126829 1311248 cri.go:89] found id: ""
	I1218 00:38:37.126843 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.126850 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:37.126855 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:37.126913 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:37.159965 1311248 cri.go:89] found id: ""
	I1218 00:38:37.159980 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.159987 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:37.159992 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:37.160048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:37.193535 1311248 cri.go:89] found id: ""
	I1218 00:38:37.193549 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.193558 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:37.193564 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:37.193622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:37.224708 1311248 cri.go:89] found id: ""
	I1218 00:38:37.224723 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.224730 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:37.224738 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:37.224749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:37.287765 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:37.287775 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:37.287787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:37.349218 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:37.349239 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:37.377886 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:37.377902 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:37.435205 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:37.435224 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:39.950327 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:39.960885 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:39.960948 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:39.985573 1311248 cri.go:89] found id: ""
	I1218 00:38:39.985587 1311248 logs.go:282] 0 containers: []
	W1218 00:38:39.985596 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:39.985602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:39.985662 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:40.020843 1311248 cri.go:89] found id: ""
	I1218 00:38:40.020859 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.020867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:40.020873 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:40.020949 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:40.067991 1311248 cri.go:89] found id: ""
	I1218 00:38:40.068007 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.068015 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:40.068021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:40.068096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:40.097024 1311248 cri.go:89] found id: ""
	I1218 00:38:40.097039 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.097047 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:40.097053 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:40.097118 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:40.127502 1311248 cri.go:89] found id: ""
	I1218 00:38:40.127518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.127526 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:40.127531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:40.127595 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:40.165566 1311248 cri.go:89] found id: ""
	I1218 00:38:40.165580 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.165587 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:40.165593 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:40.165660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:40.204927 1311248 cri.go:89] found id: ""
	I1218 00:38:40.204940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.204948 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:40.204956 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:40.204967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:40.222297 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:40.222314 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:40.292382 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:40.292392 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:40.292403 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:40.353852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:40.353871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:40.385828 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:40.385844 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
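The block above is one pass of minikube's apiserver health-check loop: it pgreps for a kube-apiserver process, asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying a few seconds later. The same pass repeats throughout the rest of this log. As a rough manual equivalent (a sketch only; it assumes a shell on the node, e.g. via minikube ssh into the affected profile, and reuses the exact commands logged here):

    # Poll the same control-plane containers the log shows minikube checking.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"   # empty output = "No container was found"
    done
    # The gate minikube is waiting on: non-zero exit while the apiserver is down.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # The kubelet journal is the usual place the startup failure is explained.
    sudo journalctl -u kubelet -n 400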
	I1218 00:38:42.942427 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:42.952937 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:42.952996 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:42.982184 1311248 cri.go:89] found id: ""
	I1218 00:38:42.982201 1311248 logs.go:282] 0 containers: []
	W1218 00:38:42.982208 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:42.982213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:42.982271 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:43.009928 1311248 cri.go:89] found id: ""
	I1218 00:38:43.009944 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.009952 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:43.009957 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:43.010021 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:43.036384 1311248 cri.go:89] found id: ""
	I1218 00:38:43.036397 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.036405 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:43.036410 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:43.036472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:43.061945 1311248 cri.go:89] found id: ""
	I1218 00:38:43.061959 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.061967 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:43.061972 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:43.062030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:43.087977 1311248 cri.go:89] found id: ""
	I1218 00:38:43.087992 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.087999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:43.088005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:43.088069 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:43.113297 1311248 cri.go:89] found id: ""
	I1218 00:38:43.113312 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.113319 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:43.113324 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:43.113390 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:43.148378 1311248 cri.go:89] found id: ""
	I1218 00:38:43.148392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.148399 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:43.148408 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:43.148419 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:43.218202 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:43.218227 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:43.234424 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:43.234441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:43.295849 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:43.287382   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.287819   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289537   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289959   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.291588   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:43.287382   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.287819   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289537   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289959   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.291588   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:43.295860 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:43.295871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:43.357903 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:43.357924 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:45.889646 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:45.899918 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:45.899981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:45.923610 1311248 cri.go:89] found id: ""
	I1218 00:38:45.923623 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.923630 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:45.923635 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:45.923696 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:45.949282 1311248 cri.go:89] found id: ""
	I1218 00:38:45.949296 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.949304 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:45.949309 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:45.949371 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:45.974071 1311248 cri.go:89] found id: ""
	I1218 00:38:45.974085 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.974092 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:45.974097 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:45.974153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:45.997865 1311248 cri.go:89] found id: ""
	I1218 00:38:45.997880 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.997887 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:45.997892 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:45.997953 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:46.026399 1311248 cri.go:89] found id: ""
	I1218 00:38:46.026413 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.026426 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:46.026432 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:46.026490 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:46.060011 1311248 cri.go:89] found id: ""
	I1218 00:38:46.060026 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.060033 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:46.060038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:46.060097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:46.095378 1311248 cri.go:89] found id: ""
	I1218 00:38:46.095392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.095398 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:46.095407 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:46.095418 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:46.110828 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:46.110845 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:46.194637 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:46.194647 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:46.194657 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:46.265968 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:46.265989 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:46.298428 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:46.298444 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:48.855794 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:48.868391 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:48.868457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:48.898010 1311248 cri.go:89] found id: ""
	I1218 00:38:48.898024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.898032 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:48.898037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:48.898097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:48.926962 1311248 cri.go:89] found id: ""
	I1218 00:38:48.926976 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.926984 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:48.926989 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:48.927046 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:48.953073 1311248 cri.go:89] found id: ""
	I1218 00:38:48.953096 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.953104 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:48.953109 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:48.953171 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:48.978527 1311248 cri.go:89] found id: ""
	I1218 00:38:48.978542 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.978548 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:48.978554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:48.978611 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:49.005774 1311248 cri.go:89] found id: ""
	I1218 00:38:49.005791 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.005800 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:49.005805 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:49.005881 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:49.032714 1311248 cri.go:89] found id: ""
	I1218 00:38:49.032743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.032751 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:49.032756 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:49.032845 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:49.058437 1311248 cri.go:89] found id: ""
	I1218 00:38:49.058451 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.058459 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:49.058468 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:49.058478 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:49.114793 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:49.114813 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:49.129898 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:49.129916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:49.218168 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:49.218179 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:49.218190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:49.289574 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:49.289595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:51.822637 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:51.833100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:51.833161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:51.858494 1311248 cri.go:89] found id: ""
	I1218 00:38:51.858508 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.858515 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:51.858520 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:51.858609 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:51.883202 1311248 cri.go:89] found id: ""
	I1218 00:38:51.883217 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.883224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:51.883229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:51.883286 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:51.911732 1311248 cri.go:89] found id: ""
	I1218 00:38:51.911746 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.911753 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:51.911758 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:51.911813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:51.937059 1311248 cri.go:89] found id: ""
	I1218 00:38:51.937073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.937080 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:51.937086 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:51.937144 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:51.960983 1311248 cri.go:89] found id: ""
	I1218 00:38:51.960998 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.961016 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:51.961021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:51.961095 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:51.985889 1311248 cri.go:89] found id: ""
	I1218 00:38:51.985904 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.985911 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:51.985916 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:51.985976 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:52.012132 1311248 cri.go:89] found id: ""
	I1218 00:38:52.012147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:52.012155 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:52.012163 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:52.012174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:52.080718 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:52.080736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:52.080748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:52.144427 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:52.144446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:52.176847 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:52.176869 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:52.239307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:52.239325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:54.754340 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:54.764793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:54.764857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:54.794012 1311248 cri.go:89] found id: ""
	I1218 00:38:54.794027 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.794034 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:54.794039 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:54.794096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:54.823133 1311248 cri.go:89] found id: ""
	I1218 00:38:54.823147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.823155 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:54.823160 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:54.823216 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:54.847977 1311248 cri.go:89] found id: ""
	I1218 00:38:54.847991 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.847998 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:54.848003 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:54.848064 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:54.873449 1311248 cri.go:89] found id: ""
	I1218 00:38:54.873462 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.873469 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:54.873475 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:54.873532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:54.897891 1311248 cri.go:89] found id: ""
	I1218 00:38:54.897905 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.897922 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:54.897928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:54.897985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:54.922432 1311248 cri.go:89] found id: ""
	I1218 00:38:54.922449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.922456 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:54.922462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:54.922520 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:54.947869 1311248 cri.go:89] found id: ""
	I1218 00:38:54.947884 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.947908 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:54.947916 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:54.947927 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:55.005409 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:55.005434 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:55.026491 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:55.026508 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:55.094641 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:55.094652 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:55.094663 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:55.159462 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:55.159481 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.695023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:57.706079 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:57.706147 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:57.735083 1311248 cri.go:89] found id: ""
	I1218 00:38:57.735106 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.735114 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:57.735119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:57.735178 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:57.762228 1311248 cri.go:89] found id: ""
	I1218 00:38:57.762242 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.762249 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:57.762255 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:57.762313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:57.787211 1311248 cri.go:89] found id: ""
	I1218 00:38:57.787226 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.787233 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:57.787238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:57.787303 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:57.812671 1311248 cri.go:89] found id: ""
	I1218 00:38:57.812686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.812693 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:57.812699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:57.812762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:57.840939 1311248 cri.go:89] found id: ""
	I1218 00:38:57.840953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.840961 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:57.840966 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:57.841031 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:57.867148 1311248 cri.go:89] found id: ""
	I1218 00:38:57.867163 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.867170 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:57.867175 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:57.867232 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:57.891633 1311248 cri.go:89] found id: ""
	I1218 00:38:57.891648 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.891665 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:57.891674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:57.891684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.918896 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:57.918913 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:57.975605 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:57.975625 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:57.990660 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:57.990676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:58.063038 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:58.053532   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.054524   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.056457   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.057354   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.058032   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:58.053532   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.054524   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.056457   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.057354   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.058032   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:58.063048 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:58.063061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.627359 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:00.638675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:00.638768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:00.669731 1311248 cri.go:89] found id: ""
	I1218 00:39:00.669745 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.669752 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:00.669757 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:00.669824 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:00.697124 1311248 cri.go:89] found id: ""
	I1218 00:39:00.697138 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.697145 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:00.697151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:00.697211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:00.722455 1311248 cri.go:89] found id: ""
	I1218 00:39:00.722469 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.722476 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:00.722486 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:00.722545 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:00.750996 1311248 cri.go:89] found id: ""
	I1218 00:39:00.751010 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.751018 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:00.751023 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:00.751091 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:00.780012 1311248 cri.go:89] found id: ""
	I1218 00:39:00.780026 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.780033 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:00.780038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:00.780105 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:00.807119 1311248 cri.go:89] found id: ""
	I1218 00:39:00.807133 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.807140 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:00.807145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:00.807213 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:00.836658 1311248 cri.go:89] found id: ""
	I1218 00:39:00.836673 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.836681 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:00.836689 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:00.836699 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:00.851616 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:00.851633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:00.919909 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:00.908348   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.909901   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.912294   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.913473   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.915083   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:00.908348   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.909901   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.912294   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.913473   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.915083   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:00.919918 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:00.919929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.985802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:00.985823 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:01.017691 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:01.017707 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.574413 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:03.585024 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:03.585088 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:03.615721 1311248 cri.go:89] found id: ""
	I1218 00:39:03.615735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.615742 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:03.615748 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:03.615811 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:03.641216 1311248 cri.go:89] found id: ""
	I1218 00:39:03.641230 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.641237 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:03.641243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:03.641307 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:03.665604 1311248 cri.go:89] found id: ""
	I1218 00:39:03.665618 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.665625 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:03.665639 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:03.665717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:03.690936 1311248 cri.go:89] found id: ""
	I1218 00:39:03.690951 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.690958 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:03.690970 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:03.691030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:03.716763 1311248 cri.go:89] found id: ""
	I1218 00:39:03.716794 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.716806 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:03.716811 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:03.716898 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:03.742156 1311248 cri.go:89] found id: ""
	I1218 00:39:03.742170 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.742177 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:03.742183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:03.742240 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:03.771205 1311248 cri.go:89] found id: ""
	I1218 00:39:03.771220 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.771227 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:03.771235 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:03.771245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:03.834106 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:03.834127 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:03.863112 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:03.863129 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.919444 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:03.919465 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:03.934588 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:03.934607 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:04.000293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:03.991688   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.992412   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994242   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994901   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.996407   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:03.991688   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.992412   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994242   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994901   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.996407   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
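
The cycle above is minikube's control-plane wait loop: roughly every 2.5 seconds it checks for a kube-apiserver process with pgrep and, finding none, lists CRI containers by name with crictl before gathering host logs. Below is a minimal Go sketch of that probe pattern, assuming it runs directly on the node (the real tooling runs these commands over SSH via ssh_runner); only the two shell commands are taken from the log, while the helper names and the retry interval are illustrative.

  // apiserver_probe.go: a sketch of the poll loop seen in this log, not
  // minikube's implementation. Requires sudo and crictl on the node.
  package main

  import (
  	"fmt"
  	"os/exec"
  	"strings"
  	"time"
  )

  // apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`.
  func apiserverRunning() bool {
  	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
  	return err == nil // pgrep exits 0 only when a matching process exists
  }

  // listCRIContainers mirrors `sudo crictl ps -a --quiet --name=<name>`;
  // empty stdout is what the log reports as `found id: ""`.
  func listCRIContainers(name string) []string {
  	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
  	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
  		return nil
  	}
  	return strings.Fields(string(out))
  }

  func main() {
  	for {
  		if apiserverRunning() {
  			fmt.Println("kube-apiserver process found")
  			return
  		}
  		if ids := listCRIContainers("kube-apiserver"); len(ids) > 0 {
  			fmt.Println("kube-apiserver container(s):", ids)
  			return
  		}
  		fmt.Println("no kube-apiserver yet; retrying")
  		time.Sleep(2500 * time.Millisecond) // interval approximated from the timestamps
  	}
  }

pgrep exits non-zero when nothing matches, which is why every cycle in this run falls through to the crictl listings and reports found id: "" for all seven component names.
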
	I1218 00:39:06.500788 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:06.511530 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:06.511596 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:06.536538 1311248 cri.go:89] found id: ""
	I1218 00:39:06.536554 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.536562 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:06.536568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:06.536651 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:06.565199 1311248 cri.go:89] found id: ""
	I1218 00:39:06.565213 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.565219 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:06.565224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:06.565283 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:06.589614 1311248 cri.go:89] found id: ""
	I1218 00:39:06.589628 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.589636 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:06.589641 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:06.589700 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:06.614004 1311248 cri.go:89] found id: ""
	I1218 00:39:06.614019 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.614027 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:06.614032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:06.614093 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:06.638819 1311248 cri.go:89] found id: ""
	I1218 00:39:06.638833 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.638841 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:06.638846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:06.638908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:06.666620 1311248 cri.go:89] found id: ""
	I1218 00:39:06.666634 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.666643 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:06.666648 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:06.666707 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:06.694192 1311248 cri.go:89] found id: ""
	I1218 00:39:06.694207 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.694216 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:06.694224 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:06.694235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:06.709318 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:06.709336 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:06.773553 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:06.764393   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.765251   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767036   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767646   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.769278   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:06.764393   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.765251   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767036   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767646   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.769278   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:06.773564 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:06.773587 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:06.842917 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:06.842937 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:06.877280 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:06.877296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
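
Every describe-nodes attempt in these cycles dies the same way: dial tcp [::1]:8441: connect: connection refused, meaning nothing is accepting connections on the apiserver address (localhost:8441, per the refused requests above). The symptom can be reproduced without kubectl by a plain TCP dial; this is a hedged sketch, not minikube code.

  // port_probe.go: checks whether anything accepts connections on the
  // apiserver port; a refused dial matches the kubectl errors in the log.
  package main

  import (
  	"fmt"
  	"net"
  	"time"
  )

  func main() {
  	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
  	if err != nil {
  		// On this run the result is "connect: connection refused".
  		fmt.Println("apiserver port not reachable:", err)
  		return
  	}
  	conn.Close()
  	fmt.Println("something is listening on localhost:8441")
  }

A refused dial (as opposed to a timeout) confirms the node itself is reachable but no process is bound to the port, which is consistent with every kube-apiserver listing above coming back empty.
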
	I1218 00:39:09.433923 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:09.445181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:09.445248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:09.470100 1311248 cri.go:89] found id: ""
	I1218 00:39:09.470115 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.470122 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:09.470127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:09.470184 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:09.499949 1311248 cri.go:89] found id: ""
	I1218 00:39:09.499964 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.499973 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:09.499978 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:09.500044 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:09.526313 1311248 cri.go:89] found id: ""
	I1218 00:39:09.526328 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.526335 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:09.526340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:09.526404 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:09.551831 1311248 cri.go:89] found id: ""
	I1218 00:39:09.551844 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.551851 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:09.551857 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:09.551923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:09.577535 1311248 cri.go:89] found id: ""
	I1218 00:39:09.577549 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.577557 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:09.577561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:09.577622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:09.602570 1311248 cri.go:89] found id: ""
	I1218 00:39:09.602584 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.602591 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:09.602597 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:09.602658 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:09.630715 1311248 cri.go:89] found id: ""
	I1218 00:39:09.630729 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.630736 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:09.630745 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:09.630755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:09.686840 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:09.686859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:09.703315 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:09.703331 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:09.770650 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:09.762484   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.762995   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.764603   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.765164   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.766687   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:09.762484   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.762995   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.764603   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.765164   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.766687   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:09.770660 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:09.770670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:09.832439 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:09.832457 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:12.361961 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:12.372127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:12.372190 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:12.408061 1311248 cri.go:89] found id: ""
	I1218 00:39:12.408075 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.408082 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:12.408088 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:12.408145 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:12.434860 1311248 cri.go:89] found id: ""
	I1218 00:39:12.434874 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.434881 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:12.434886 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:12.434946 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:12.465255 1311248 cri.go:89] found id: ""
	I1218 00:39:12.465270 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.465278 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:12.465283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:12.465341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:12.494330 1311248 cri.go:89] found id: ""
	I1218 00:39:12.494344 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.494350 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:12.494356 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:12.494420 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:12.518885 1311248 cri.go:89] found id: ""
	I1218 00:39:12.518900 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.518907 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:12.518912 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:12.518973 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:12.543549 1311248 cri.go:89] found id: ""
	I1218 00:39:12.543564 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.543573 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:12.543578 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:12.543641 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:12.568469 1311248 cri.go:89] found id: ""
	I1218 00:39:12.568483 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.568500 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:12.568507 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:12.568519 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:12.624017 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:12.624039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:12.639011 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:12.639028 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:12.703723 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:12.695186   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.695878   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697409   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697942   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.699375   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:12.695186   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.695878   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697409   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697942   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.699375   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:12.703734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:12.703744 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:12.765331 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:12.765350 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.294913 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:15.308145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:15.308210 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:15.340203 1311248 cri.go:89] found id: ""
	I1218 00:39:15.340218 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.340225 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:15.340230 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:15.340289 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:15.367732 1311248 cri.go:89] found id: ""
	I1218 00:39:15.367747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.367754 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:15.367760 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:15.367818 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:15.398027 1311248 cri.go:89] found id: ""
	I1218 00:39:15.398042 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.398049 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:15.398055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:15.398115 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:15.430352 1311248 cri.go:89] found id: ""
	I1218 00:39:15.430366 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.430373 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:15.430379 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:15.430442 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:15.461268 1311248 cri.go:89] found id: ""
	I1218 00:39:15.461283 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.461291 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:15.461297 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:15.461361 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:15.487656 1311248 cri.go:89] found id: ""
	I1218 00:39:15.487671 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.487678 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:15.487684 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:15.487744 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:15.516835 1311248 cri.go:89] found id: ""
	I1218 00:39:15.516850 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.516858 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:15.516867 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:15.516877 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:15.584348 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:15.575875   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.576694   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578221   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578844   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.580399   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:15.575875   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.576694   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578221   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578844   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.580399   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:15.584357 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:15.584377 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:15.646829 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:15.646849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.675913 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:15.675929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:15.731421 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:15.731441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
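
With no control-plane containers to inspect, each cycle falls back to the same four host-side sources: the kubelet and containerd journals, kernel warnings from dmesg, and container status from crictl with a docker fallback. A small Go wrapper that runs those collectors follows; the command strings are copied verbatim from the log, the map keys are just labels, and running it needs sudo on the node.

  // gather_logs.go: runs the host-side log sources the wait loop collects.
  package main

  import (
  	"fmt"
  	"os/exec"
  )

  func main() {
  	sources := map[string]string{
  		"kubelet":          "sudo journalctl -u kubelet -n 400",
  		"containerd":       "sudo journalctl -u containerd -n 400",
  		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
  		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
  	}
  	for name, cmd := range sources {
  		// bash -c matches how the log invokes each pipeline.
  		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
  		fmt.Printf("== %s (%d bytes, err=%v) ==\n", name, len(out), err)
  	}
  }
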
	I1218 00:39:18.246605 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:18.257277 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:18.257340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:18.282497 1311248 cri.go:89] found id: ""
	I1218 00:39:18.282512 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.282519 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:18.282527 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:18.282594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:18.317178 1311248 cri.go:89] found id: ""
	I1218 00:39:18.317193 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.317200 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:18.317205 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:18.317267 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:18.342018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.342032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.342039 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:18.342044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:18.342098 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:18.366018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.366032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.366040 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:18.366045 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:18.366107 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:18.390880 1311248 cri.go:89] found id: ""
	I1218 00:39:18.390894 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.390902 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:18.390908 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:18.390968 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:18.427152 1311248 cri.go:89] found id: ""
	I1218 00:39:18.427167 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.427174 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:18.427181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:18.427241 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:18.458481 1311248 cri.go:89] found id: ""
	I1218 00:39:18.458495 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.458502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:18.458510 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:18.458521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:18.486379 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:18.486397 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:18.546371 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:18.546396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:18.561410 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:18.561431 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:18.625094 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:18.616770   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.617298   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.618947   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.619597   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.621337   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:18.616770   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.617298   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.618947   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.619597   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.621337   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:18.625105 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:18.625118 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.187071 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:21.197777 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:21.197842 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:21.228457 1311248 cri.go:89] found id: ""
	I1218 00:39:21.228472 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.228479 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:21.228485 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:21.228551 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:21.254227 1311248 cri.go:89] found id: ""
	I1218 00:39:21.254240 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.254258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:21.254264 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:21.254321 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:21.283166 1311248 cri.go:89] found id: ""
	I1218 00:39:21.283180 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.283187 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:21.283193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:21.283259 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:21.307940 1311248 cri.go:89] found id: ""
	I1218 00:39:21.307954 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.307962 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:21.307967 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:21.308022 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:21.333576 1311248 cri.go:89] found id: ""
	I1218 00:39:21.333590 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.333597 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:21.333602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:21.333660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:21.357404 1311248 cri.go:89] found id: ""
	I1218 00:39:21.357418 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.357425 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:21.357430 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:21.357488 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:21.386789 1311248 cri.go:89] found id: ""
	I1218 00:39:21.386803 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.386811 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:21.386819 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:21.386830 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:21.467813 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:21.459694   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.460331   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.461853   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.462343   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.463834   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:21.459694   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.460331   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.461853   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.462343   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.463834   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:21.467824 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:21.467834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.529999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:21.530019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:21.561213 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:21.561228 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:21.619110 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:21.619128 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:24.133884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:24.144224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:24.144298 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:24.169895 1311248 cri.go:89] found id: ""
	I1218 00:39:24.169909 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.169916 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:24.169922 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:24.169981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:24.196376 1311248 cri.go:89] found id: ""
	I1218 00:39:24.196390 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.196396 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:24.196401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:24.196464 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:24.220959 1311248 cri.go:89] found id: ""
	I1218 00:39:24.220978 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.220986 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:24.220991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:24.221051 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:24.246721 1311248 cri.go:89] found id: ""
	I1218 00:39:24.246735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.246745 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:24.246751 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:24.246819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:24.271380 1311248 cri.go:89] found id: ""
	I1218 00:39:24.271394 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.271401 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:24.271406 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:24.271466 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:24.298631 1311248 cri.go:89] found id: ""
	I1218 00:39:24.298645 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.298652 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:24.298657 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:24.298713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:24.322933 1311248 cri.go:89] found id: ""
	I1218 00:39:24.322947 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.322965 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:24.322974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:24.322984 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:24.378307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:24.378325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:24.395279 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:24.395296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:24.478731 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:24.469907   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.470803   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.472526   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.473242   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.474699   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:24.469907   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.470803   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.472526   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.473242   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.474699   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:24.478740 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:24.478750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:24.539558 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:24.539578 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.069527 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:27.079511 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:27.079570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:27.104730 1311248 cri.go:89] found id: ""
	I1218 00:39:27.104747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.104754 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:27.104759 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:27.104826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:27.134528 1311248 cri.go:89] found id: ""
	I1218 00:39:27.134543 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.134551 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:27.134556 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:27.134618 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:27.160290 1311248 cri.go:89] found id: ""
	I1218 00:39:27.160304 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.160311 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:27.160316 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:27.160374 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:27.187607 1311248 cri.go:89] found id: ""
	I1218 00:39:27.187621 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.187628 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:27.187634 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:27.187691 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:27.214602 1311248 cri.go:89] found id: ""
	I1218 00:39:27.214616 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.214623 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:27.214630 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:27.214690 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:27.239452 1311248 cri.go:89] found id: ""
	I1218 00:39:27.239466 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.239474 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:27.239479 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:27.239538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:27.268209 1311248 cri.go:89] found id: ""
	I1218 00:39:27.268232 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.268240 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:27.268248 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:27.268259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:27.283007 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:27.283033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:27.351624 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:27.341545   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.342008   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.344497   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.345754   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.346472   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:27.341545   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.342008   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.344497   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.345754   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.346472   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
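Every "describe nodes" attempt in this log fails the same way: nothing is answering on the apiserver port, so kubectl's discovery requests are refused. A quick manual confirmation from inside the node, assuming 8441 is the --apiserver-port configured for this profile (as the URLs above indicate):

  sudo ss -tlnp | grep ':8441 ' || echo 'no listener on 8441'
  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes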
	I1218 00:39:27.351634 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:27.351644 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:27.414794 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:27.414814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.449027 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:27.449042 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
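The lines above show one pass of minikube's apiserver wait loop: crictl sweeps each expected control-plane container name, and when every sweep comes back empty, the kubelet, dmesg, describe-nodes, containerd, and container-status logs are gathered before the next retry (which opens with the pgrep check below). One pass can be reproduced by hand; a sketch assuming the default containerd CRI socket inside the node:

  sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # empty: no apiserver process
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    sudo crictl ps -a --quiet --name="$c"           # empty: no container either, matching the 'found id: ""' lines
  done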
	I1218 00:39:30.008353 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:30.051512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:30.051599 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:30.142207 1311248 cri.go:89] found id: ""
	I1218 00:39:30.142226 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.142234 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:30.142241 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:30.142317 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:30.175952 1311248 cri.go:89] found id: ""
	I1218 00:39:30.175967 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.175979 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:30.175985 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:30.176054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:30.202613 1311248 cri.go:89] found id: ""
	I1218 00:39:30.202640 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.202649 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:30.202655 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:30.202718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:30.229638 1311248 cri.go:89] found id: ""
	I1218 00:39:30.229653 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.229661 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:30.229666 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:30.229728 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:30.261192 1311248 cri.go:89] found id: ""
	I1218 00:39:30.261206 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.261214 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:30.261220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:30.261285 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:30.288158 1311248 cri.go:89] found id: ""
	I1218 00:39:30.288173 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.288180 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:30.288189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:30.288251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:30.314418 1311248 cri.go:89] found id: ""
	I1218 00:39:30.314432 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.314441 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:30.314450 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:30.314462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:30.369830 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:30.369849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:30.385018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:30.385037 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:30.467908 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:30.467920 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:30.467930 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:30.529075 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:30.529095 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:33.059241 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:33.070119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:33.070182 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:33.095716 1311248 cri.go:89] found id: ""
	I1218 00:39:33.095730 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.095738 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:33.095744 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:33.095804 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:33.121681 1311248 cri.go:89] found id: ""
	I1218 00:39:33.121697 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.121711 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:33.121717 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:33.121783 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:33.147424 1311248 cri.go:89] found id: ""
	I1218 00:39:33.147438 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.147445 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:33.147451 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:33.147514 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:33.173916 1311248 cri.go:89] found id: ""
	I1218 00:39:33.173931 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.173938 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:33.173943 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:33.174004 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:33.199675 1311248 cri.go:89] found id: ""
	I1218 00:39:33.199690 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.199697 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:33.199702 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:33.199761 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:33.229684 1311248 cri.go:89] found id: ""
	I1218 00:39:33.229698 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.229706 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:33.229711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:33.229771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:33.255931 1311248 cri.go:89] found id: ""
	I1218 00:39:33.255955 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.255963 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:33.255971 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:33.255981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:33.312520 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:33.312538 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:33.327008 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:33.327024 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:33.392853 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:33.392863 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:33.392873 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:33.462852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:33.462872 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:35.991111 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:36.001578 1311248 kubeadm.go:602] duration metric: took 4m4.636770246s to restartPrimaryControlPlane
	W1218 00:39:36.001631 1311248 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1218 00:39:36.001712 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:39:36.428039 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
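Having given up on restarting the existing control plane, minikube wipes it and re-initializes. The equivalent manual steps, using the kubeadm binary path and CRI socket shown in the log:

  sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
    kubeadm reset --cri-socket /run/containerd/containerd.sock --force
  sudo systemctl is-active --quiet kubelet && echo kubelet running || echo kubelet stopped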
	I1218 00:39:36.441875 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:39:36.449799 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:39:36.449855 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:39:36.457535 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:39:36.457543 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:39:36.457593 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:39:36.465339 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:39:36.465393 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:39:36.472406 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:39:36.480110 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:39:36.480163 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:39:36.487432 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.494964 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:39:36.495019 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.502375 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:39:36.509914 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:39:36.509976 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
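The four grep-then-rm pairs above are minikube's stale-kubeconfig sweep: any file under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before kubeadm init runs (here every grep fails simply because the files are already gone after the reset). Collapsed into one loop, with the endpoint taken verbatim from the log:

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
      || sudo rm -f "/etc/kubernetes/$f.conf"
  done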
	I1218 00:39:36.517325 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:39:36.642706 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:39:36.643096 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:39:36.709498 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:43:38.241451 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 00:43:38.241477 1311248 kubeadm.go:319] 
	I1218 00:43:38.241546 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 00:43:38.245587 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.245639 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.245728 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.245779 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.245813 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.245856 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.245904 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.245947 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.246021 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.246074 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.246124 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.246169 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.246253 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.246316 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.246394 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.246489 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.246578 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.246661 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.249668 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.249761 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.249825 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.249900 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.249985 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.250056 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.250107 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.250167 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.250231 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.250306 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.250386 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.250429 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.250494 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:38.250547 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:38.250611 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:38.250669 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:38.250731 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:38.250784 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:38.250896 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:38.250969 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:38.255653 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:38.255752 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:38.255840 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:38.255905 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:38.256008 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:38.256128 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:38.256248 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:38.256329 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:38.256365 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:38.256499 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:38.256681 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:43:38.256752 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000096267s
	I1218 00:43:38.256755 1311248 kubeadm.go:319] 
	I1218 00:43:38.256814 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:43:38.256853 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:43:38.256963 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:43:38.256967 1311248 kubeadm.go:319] 
	I1218 00:43:38.257093 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:43:38.257126 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:43:38.257155 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:43:38.257212 1311248 kubeadm.go:319] 
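The wait-control-plane failure above means kubeadm never saw the kubelet report healthy on its local healthz endpoint within the 4m0s budget. The same probe, plus the two troubleshooting commands kubeadm itself recommends, can be run directly on the node:

  curl -sS http://127.0.0.1:10248/healthz; echo     # a healthy kubelet answers 'ok'
  systemctl status kubelet --no-pager
  journalctl -xeu kubelet -n 50 --no-pager          # usually shows why the kubelet exited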
	W1218 00:43:38.257278 1311248 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000096267s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
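The cgroup v1 warning repeated in this stderr names the relevant knob: on a cgroup v1 host such as this 5.15 AWS kernel, kubelet v1.35 must be told explicitly to tolerate cgroup v1. A minimal sketch of the KubeletConfiguration fragment that the warning describes (where exactly this would be injected into /var/tmp/minikube/kubeadm.yaml is an assumption, not something this log shows, and the log does not prove this is the reason the kubelet never became healthy):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  failCgroupV1: false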
	
	I1218 00:43:38.257393 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:43:38.672580 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:43:38.686195 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:43:38.686247 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:43:38.694107 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:43:38.694119 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:43:38.694170 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:43:38.702289 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:43:38.702343 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:43:38.710380 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:43:38.718160 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:43:38.718218 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:43:38.726244 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.734209 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:43:38.734268 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.741907 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:43:38.749716 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:43:38.749773 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 00:43:38.757471 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:43:38.797919 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.797966 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.877731 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.877795 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.877835 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.877879 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.877926 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.877972 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.878019 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.878065 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.878112 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.878155 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.878202 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.878247 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.941330 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.941446 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.941535 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.951935 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.957317 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.957410 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.957474 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.957580 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.957646 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.957723 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.957784 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.957852 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.957913 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.957987 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.958059 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.958095 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.958151 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:39.202920 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:39.377892 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:39.964483 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:40.103558 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:40.457630 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:40.458383 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:40.462089 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:40.465489 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:40.465583 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:40.465654 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:40.465716 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:40.486385 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:40.486497 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:40.494535 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:40.494848 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:40.495030 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:40.625355 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:40.625497 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:47:40.625149 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000298437s
	I1218 00:47:40.625174 1311248 kubeadm.go:319] 
	I1218 00:47:40.625227 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:47:40.625262 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:47:40.625362 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:47:40.625367 1311248 kubeadm.go:319] 
	I1218 00:47:40.625481 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:47:40.625513 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:47:40.625550 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:47:40.625553 1311248 kubeadm.go:319] 
	I1218 00:47:40.629455 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:47:40.629954 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:47:40.630083 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:47:40.630316 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 00:47:40.630321 1311248 kubeadm.go:319] 
	I1218 00:47:40.630384 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
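Of the three recurring warnings, the "failed to parse kernel config" one is the most benign: kubeadm's SystemVerification check runs modprobe configs to read the kernel's build-time configuration, and Ubuntu's AWS kernels ship that data as a file on disk rather than as a loadable module. A manual check, assuming a stock Ubuntu image layout:

  sudo modprobe configs 2>/dev/null || true
  ls /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null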
	I1218 00:47:40.630455 1311248 kubeadm.go:403] duration metric: took 12m9.299018648s to StartCluster
	I1218 00:47:40.630487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:47:40.630549 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:47:40.655474 1311248 cri.go:89] found id: ""
	I1218 00:47:40.655489 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.655497 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:47:40.655502 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:47:40.655558 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:47:40.681677 1311248 cri.go:89] found id: ""
	I1218 00:47:40.681692 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.681699 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:47:40.681705 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:47:40.681772 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:47:40.714293 1311248 cri.go:89] found id: ""
	I1218 00:47:40.714307 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.714314 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:47:40.714319 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:47:40.714379 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:47:40.739065 1311248 cri.go:89] found id: ""
	I1218 00:47:40.739089 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.739097 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:47:40.739102 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:47:40.739168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:47:40.763653 1311248 cri.go:89] found id: ""
	I1218 00:47:40.763666 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.763673 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:47:40.763678 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:47:40.763737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:47:40.789038 1311248 cri.go:89] found id: ""
	I1218 00:47:40.789052 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.789059 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:47:40.789065 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:47:40.789124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:47:40.817866 1311248 cri.go:89] found id: ""
	I1218 00:47:40.817880 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.817887 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:47:40.817895 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:47:40.817905 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:47:40.877071 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:47:40.877090 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:47:40.891818 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:47:40.891835 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:47:40.956585 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:47:40.956595 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:47:40.956605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:47:41.023372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:47:41.023390 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1218 00:47:41.051126 1311248 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 00:47:41.051157 1311248 out.go:285] * 
	W1218 00:47:41.051213 1311248 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.051229 1311248 out.go:285] * 
	W1218 00:47:41.053388 1311248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:47:41.058223 1311248 out.go:203] 
	W1218 00:47:41.061890 1311248 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the "Error starting cluster" kubeadm output above
	
	W1218 00:47:41.061936 1311248 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 00:47:41.061956 1311248 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 00:47:41.065091 1311248 out.go:203] 
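
The suggestion above is directly actionable. As a minimal sketch, assuming the same profile as this run (only the --extra-config flag is new; the remaining flags would match the original invocation):

    # Retry the failed start with the kubelet cgroup driver override that
    # minikube suggests above. This is a sketch of the suggested remediation,
    # not a verified fix; the profile name is taken from this run.
    out/minikube-linux-arm64 start -p functional-232602 \
      --extra-config=kubelet.cgroup-driver=systemd
    # The kubeadm warning above also names the KubeletConfiguration field
    # 'failCgroupV1': on a cgroup v1 host, kubelet v1.35 or newer refuses to
    # start unless it is explicitly set to false (see the KEP link in the
    # warning).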
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:47:50 functional-232602 containerd[9652]: time="2025-12-18T00:47:50.480115132Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.479679470Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.482375935Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.484746123Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.493400844Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.832040692Z" level=info msg="No images store for sha256:88871186651aea0ed5608315e891b3426e70e8f85f75cbd135c4079fb9a8af37"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.834441140Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.842565463Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.843007052Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.134966568Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.137526298Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.142612413Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.150391104Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.447523093Z" level=info msg="No images store for sha256:88871186651aea0ed5608315e891b3426e70e8f85f75cbd135c4079fb9a8af37"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.449756341Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.461849843Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.462352304Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.465606883Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.468013616Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.471019652Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.479506099Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.295881886Z" level=info msg="No images store for sha256:fbee3dfdb946545a8487e59f5adaf8b308b880e0a9660068998d6d7ea3033fed"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.298353921Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.307420645Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.307912686Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:49:31.899856   23049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:31.900558   23049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:31.902122   23049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:31.902711   23049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:49:31.904204   23049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
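
The connection-refused errors above can be confirmed directly against the apiserver port. A hypothetical manual probe (the port is taken from the errors above; -k skips certificate verification):

    # Probe the apiserver endpoint that kubectl failed to reach:
    curl -k https://localhost:8441/healthz
    # With no kube-apiserver static pod running (the container status above
    # is empty), this is expected to fail with "connection refused".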
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:49:31 up  7:31,  0 user,  load average: 0.49, 0.37, 0.46
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:49:28 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:29 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 464.
	Dec 18 00:49:29 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:29 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:29 functional-232602 kubelet[22935]: E1218 00:49:29.183868   22935 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:29 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:29 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:29 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 465.
	Dec 18 00:49:29 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:29 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:29 functional-232602 kubelet[22940]: E1218 00:49:29.940688   22940 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:29 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:29 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:30 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 466.
	Dec 18 00:49:30 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:30 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:30 functional-232602 kubelet[22945]: E1218 00:49:30.690780   22945 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:30 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:30 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:49:31 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 467.
	Dec 18 00:49:31 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:31 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:49:31 functional-232602 kubelet[22966]: E1218 00:49:31.457411   22966 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:49:31 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:49:31 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
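
The kubelet journal above shows the service crash-looping on its cgroup v1 validation check. A short sketch of how one might confirm the host's cgroup mode and watch the loop; the stat command is a common check and an assumption here, not something this log ran:

    # Identify which cgroup hierarchy the host exposes (assumed check):
    stat -fc %T /sys/fs/cgroup    # "cgroup2fs" => cgroup v2, "tmpfs" => cgroup v1
    # Follow the restart loop, per the advice kubeadm printed above:
    journalctl -xeu kubelet
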
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (378.513149ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (242s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
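The poll below retries this pod list against the apiserver until it times out. The equivalent one-off query, with the namespace and label selector taken from the log (the kubectl context name is an assumption, matching the minikube profile):

    kubectl --context functional-232602 -n kube-system get pods \
      -l integration-test=storage-provisioner
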
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1218 00:48:07.527712 1261148 retry.go:31] will retry after 2.958314214s: Temporary Error: Get "http://10.106.246.222": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1218 00:48:20.487556 1261148 retry.go:31] will retry after 5.133997662s: Temporary Error: Get "http://10.106.246.222": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1218 00:48:25.214389 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1218 00:48:35.622349 1261148 retry.go:31] will retry after 7.965243309s: Temporary Error: Get "http://10.106.246.222": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1218 00:48:53.588690 1261148 retry.go:31] will retry after 7.746057169s: Temporary Error: Get "http://10.106.246.222": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1218 00:49:11.336544 1261148 retry.go:31] will retry after 8.743452404s: Temporary Error: Get "http://10.106.246.222": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1218 00:50:04.395549 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: (last message repeated 83 more times)
E1218 00:51:28.294119 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: (last message repeated 30 more times)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
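
For context: the run of identical warnings above is a label-selector poll against the apiserver that keeps failing with "connection refused" until the client-go rate limiter finally returns the context error. A minimal sketch of that kind of loop follows, assuming client-go; the function and variable names here are illustrative, not minikube's actual helper.

	// waitForPods polls a namespace for pods matching a label selector,
	// logging a warning on each failed list call (sketch only).
	package helpers

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// once the overall budget lapses this surfaces as
				// "context deadline exceeded"
				return ctx.Err()
			case <-ticker.C:
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					// same shape as the WARNING lines in this log
					fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
					continue
				}
				if len(pods.Items) > 0 {
					return nil
				}
			}
		}
	}

With the apiserver down (the docker-inspect output further below shows the container running but the apiserver "Stopped"), every List call hits a closed port, so the loop can only exhaust its deadline.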
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (611.034121ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
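
The helper was polling the apiserver for a pod with that label until the 4m0s deadline expired. To reproduce the same query by hand (a sketch; the namespace and label selector are taken from the warnings above, and the context name assumes the profile's default kubeconfig entry):

	# Same pod list the test helper retries; fails with "connection refused"
	# for as long as the apiserver on 192.168.49.2:8441 is down.
	kubectl --context functional-232602 -n kube-system \
	  get pods -l integration-test=storage-provisioner
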
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
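
The inspect dump shows the apiserver's 8441/tcp still published on 127.0.0.1:33905 even though nothing inside is answering. To pull just that one mapping instead of the full JSON, docker inspect's Go-template --format flag can index the port map directly (a sketch; should print 33905 for the container above):

	docker inspect functional-232602 \
	  --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
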
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (339.630532ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
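
Note the split: the node container is Running while the apiserver (checked just above) is Stopped. Both fields come from the same status struct, so they can be read in one call (a sketch; minikube status renders a Go template over that struct and exits non-zero when any component is down, which is why the harness treats exit status 2 as "may be ok"):

	minikube status -p functional-232602 --format '{{.Host}} {{.APIServer}} {{.Kubelet}}'
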
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-232602 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh            │ functional-232602 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh            │ functional-232602 ssh -- ls -la /mount-9p                                                                                                         │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh            │ functional-232602 ssh sudo umount -f /mount-9p                                                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ mount          │ -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount1 --alsologtostderr -v=1              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh            │ functional-232602 ssh findmnt -T /mount1                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ mount          │ -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount2 --alsologtostderr -v=1              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ mount          │ -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount3 --alsologtostderr -v=1              │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ ssh            │ functional-232602 ssh findmnt -T /mount2                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh            │ functional-232602 ssh findmnt -T /mount3                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ mount          │ -p functional-232602 --kill=true                                                                                                                  │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ start          │ -p functional-232602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ start          │ -p functional-232602 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ start          │ -p functional-232602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-232602 --alsologtostderr -v=1                                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ update-context │ functional-232602 update-context --alsologtostderr -v=2                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ update-context │ functional-232602 update-context --alsologtostderr -v=2                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ update-context │ functional-232602 update-context --alsologtostderr -v=2                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ image          │ functional-232602 image ls --format short --alsologtostderr                                                                                       │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ image          │ functional-232602 image ls --format yaml --alsologtostderr                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ ssh            │ functional-232602 ssh pgrep buildkitd                                                                                                             │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │                     │
	│ image          │ functional-232602 image build -t localhost/my-image:functional-232602 testdata/build --alsologtostderr                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ image          │ functional-232602 image ls                                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ image          │ functional-232602 image ls --format json --alsologtostderr                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	│ image          │ functional-232602 image ls --format table --alsologtostderr                                                                                       │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:49 UTC │ 18 Dec 25 00:49 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:49:43
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:49:43.724650 1329993 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:49:43.724800 1329993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:49:43.724829 1329993 out.go:374] Setting ErrFile to fd 2...
	I1218 00:49:43.724835 1329993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:49:43.725246 1329993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:49:43.725655 1329993 out.go:368] Setting JSON to false
	I1218 00:49:43.726537 1329993 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27130,"bootTime":1765991854,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:49:43.726603 1329993 start.go:143] virtualization:  
	I1218 00:49:43.729825 1329993 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:49:43.732853 1329993 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:49:43.732977 1329993 notify.go:221] Checking for updates...
	I1218 00:49:43.738587 1329993 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:49:43.741453 1329993 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:49:43.744301 1329993 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:49:43.747141 1329993 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:49:43.749958 1329993 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:49:43.753490 1329993 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:49:43.754156 1329993 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:49:43.785304 1329993 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:49:43.785430 1329993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:49:43.841100 1329993 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:49:43.829277142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:49:43.841205 1329993 docker.go:319] overlay module found
	I1218 00:49:43.844333 1329993 out.go:179] * Using the docker driver based on the existing profile
	I1218 00:49:43.847146 1329993 start.go:309] selected driver: docker
	I1218 00:49:43.847177 1329993 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:49:43.847299 1329993 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:49:43.851013 1329993 out.go:203] 
	W1218 00:49:43.853978 1329993 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1218 00:49:43.856960 1329993 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.842565463Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:51 functional-232602 containerd[9652]: time="2025-12-18T00:47:51.843007052Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.134966568Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.137526298Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.142612413Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.150391104Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.447523093Z" level=info msg="No images store for sha256:88871186651aea0ed5608315e891b3426e70e8f85f75cbd135c4079fb9a8af37"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.449756341Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.461849843Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:53 functional-232602 containerd[9652]: time="2025-12-18T00:47:53.462352304Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.465606883Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.468013616Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.471019652Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 18 00:47:54 functional-232602 containerd[9652]: time="2025-12-18T00:47:54.479506099Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-232602\" returns successfully"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.295881886Z" level=info msg="No images store for sha256:fbee3dfdb946545a8487e59f5adaf8b308b880e0a9660068998d6d7ea3033fed"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.298353921Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.307420645Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:55 functional-232602 containerd[9652]: time="2025-12-18T00:47:55.307912686Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:49:49 functional-232602 containerd[9652]: time="2025-12-18T00:49:49.979433963Z" level=info msg="connecting to shim hcis29roovz54d4xneoxvzno5" address="unix:///run/containerd/s/51e02e4c764a636aa606cd8ae9b53a00bc028c71611cde73dfd98e6713f38dfa" namespace=k8s.io protocol=ttrpc version=3
	Dec 18 00:49:50 functional-232602 containerd[9652]: time="2025-12-18T00:49:50.071156818Z" level=info msg="shim disconnected" id=hcis29roovz54d4xneoxvzno5 namespace=k8s.io
	Dec 18 00:49:50 functional-232602 containerd[9652]: time="2025-12-18T00:49:50.071207689Z" level=info msg="cleaning up after shim disconnected" id=hcis29roovz54d4xneoxvzno5 namespace=k8s.io
	Dec 18 00:49:50 functional-232602 containerd[9652]: time="2025-12-18T00:49:50.071218068Z" level=info msg="cleaning up dead shim" id=hcis29roovz54d4xneoxvzno5 namespace=k8s.io
	Dec 18 00:49:50 functional-232602 containerd[9652]: time="2025-12-18T00:49:50.333364715Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-232602\""
	Dec 18 00:49:50 functional-232602 containerd[9652]: time="2025-12-18T00:49:50.340950298Z" level=info msg="ImageCreate event name:\"sha256:164768fa125038411f5912f105fa73c4a3ff6109d752a9662986211a7beebf0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:49:50 functional-232602 containerd[9652]: time="2025-12-18T00:49:50.341310839Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:52:01.552352   25157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:52:01.553064   25157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:52:01.554767   25157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:52:01.555435   25157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:52:01.557058   25157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:52:01 up  7:34,  0 user,  load average: 0.38, 0.33, 0.43
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:51:58 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:51:59 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 664.
	Dec 18 00:51:59 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:51:59 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:51:59 functional-232602 kubelet[25025]: E1218 00:51:59.188271   25025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:51:59 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:51:59 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:51:59 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 665.
	Dec 18 00:51:59 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:51:59 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:51:59 functional-232602 kubelet[25031]: E1218 00:51:59.933713   25031 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:51:59 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:51:59 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:52:00 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 666.
	Dec 18 00:52:00 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:52:00 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:52:00 functional-232602 kubelet[25052]: E1218 00:52:00.713536   25052 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:52:00 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:52:00 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:52:01 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 667.
	Dec 18 00:52:01 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:52:01 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:52:01 functional-232602 kubelet[25131]: E1218 00:52:01.462986   25131 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:52:01 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:52:01 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (318.755033ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (242.00s)
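
The kubelet excerpt above is the root cause for this whole group of failures: kubelet v1.35.0-rc.1 refuses to start on a host still using cgroup v1 ("cgroup v1 support is unsupported"), systemd restarts it in a loop (restart counter 664 to 667), and with no kubelet the apiserver on port 8441 never comes back, so every kubectl call sees connection refused. A quick way to check which cgroup version the node sees (a sketch; stat reports cgroup2fs on a v2 unified hierarchy and tmpfs on v1, matching the kubelet's complaint here):

	# Filesystem type of the cgroup mount inside the node container
	docker exec functional-232602 stat -fc %T /sys/fs/cgroup
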
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (3.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-232602 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-232602 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (87.733543ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-232602 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
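
All five label checks fail the same way for a secondary reason: with the apiserver refusing connections, kubectl renders an empty items list, so (index .items 0) indexes out of range inside the template. A guarded variant of the same template that prints nothing instead of raising a template error when the list is empty (a sketch; the connection-refused exit status would remain):

	kubectl --context functional-232602 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'
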
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:

-- stdout --
	[
	    {
	        "Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	        "Created": "2025-12-18T00:20:52.193636538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1300116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T00:20:52.255390589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
	        "LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
	        "Name": "/functional-232602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-232602:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-232602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
	                "LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-232602",
	                "Source": "/var/lib/docker/volumes/functional-232602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-232602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-232602",
	                "name.minikube.sigs.k8s.io": "functional-232602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
	            "SandboxKey": "/var/run/docker/netns/e580e3c37349",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33905"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-232602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b2:23:bb:20:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
	                    "EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-232602",
	                        "99b81787dd55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
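
Note: the inspect output above already contains everything the port checks below rely on. As a quick cross-check, the same Go-template form the harness uses for 22/tcp can be pointed at the apiserver port; a minimal sketch using the container name from this run:

	# Print the host port Docker mapped to the apiserver port (8441/tcp).
	docker container inspect functional-232602 --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
	# Expected for this run: 33905
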
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 2 (430.490085ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-232602 logs -n 25: (1.370084459s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-232602 ssh sudo crictl images                                                                                                                     │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ cache   │ functional-232602 cache reload                                                                                                                               │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ ssh     │ functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │ 18 Dec 25 00:35 UTC │
	│ kubectl │ functional-232602 kubectl -- --context functional-232602 get pods                                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ start   │ -p functional-232602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:35 UTC │                     │
	│ cp      │ functional-232602 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ config  │ functional-232602 config unset cpus                                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ config  │ functional-232602 config get cpus                                                                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ config  │ functional-232602 config set cpus 2                                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ config  │ functional-232602 config get cpus                                                                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ config  │ functional-232602 config unset cpus                                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ config  │ functional-232602 config get cpus                                                                                                                            │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ ssh     │ functional-232602 ssh -n functional-232602 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ cp      │ functional-232602 cp functional-232602:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3387164111/001/cp-test.txt │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ ssh     │ functional-232602 ssh sudo systemctl is-active docker                                                                                                        │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ ssh     │ functional-232602 ssh sudo systemctl is-active crio                                                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ ssh     │ functional-232602 ssh -n functional-232602 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ cp      │ functional-232602 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	│ image   │ functional-232602 image load --daemon kicbase/echo-server:functional-232602 --alsologtostderr                                                                │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │                     │
	│ ssh     │ functional-232602 ssh -n functional-232602 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:47 UTC │ 18 Dec 25 00:47 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
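
Note: the cache entries in the audit table above form a remove/reload/verify round trip. Replayed by hand it looks like the following (flag placement is an assumption; the audit table records only command and args):

	out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-232602 cache reload
	out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest
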
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:35:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:35:27.044902 1311248 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:35:27.045002 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045006 1311248 out.go:374] Setting ErrFile to fd 2...
	I1218 00:35:27.045010 1311248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:35:27.045249 1311248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:35:27.045606 1311248 out.go:368] Setting JSON to false
	I1218 00:35:27.046406 1311248 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26273,"bootTime":1765991854,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:35:27.046458 1311248 start.go:143] virtualization:  
	I1218 00:35:27.049930 1311248 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:35:27.052925 1311248 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:35:27.053012 1311248 notify.go:221] Checking for updates...
	I1218 00:35:27.058856 1311248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:35:27.061872 1311248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:35:27.064792 1311248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:35:27.067743 1311248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:35:27.070676 1311248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:35:27.074096 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:27.074190 1311248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:35:27.106641 1311248 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:35:27.106748 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.164302 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.154715728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.164392 1311248 docker.go:319] overlay module found
	I1218 00:35:27.167427 1311248 out.go:179] * Using the docker driver based on existing profile
	I1218 00:35:27.170281 1311248 start.go:309] selected driver: docker
	I1218 00:35:27.170292 1311248 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.170444 1311248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:35:27.170546 1311248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:35:27.230048 1311248 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-18 00:35:27.221277832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:35:27.230469 1311248 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 00:35:27.230491 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:27.230542 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:27.230580 1311248 start.go:353] cluster config:
	{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:27.235511 1311248 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
	I1218 00:35:27.238271 1311248 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:35:27.241192 1311248 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:35:27.243943 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:27.243991 1311248 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 00:35:27.243999 1311248 cache.go:65] Caching tarball of preloaded images
	I1218 00:35:27.244040 1311248 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:35:27.244087 1311248 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 00:35:27.244096 1311248 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 00:35:27.244211 1311248 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
	I1218 00:35:27.263574 1311248 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 00:35:27.263584 1311248 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 00:35:27.263598 1311248 cache.go:243] Successfully downloaded all kic artifacts
	I1218 00:35:27.263628 1311248 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 00:35:27.263679 1311248 start.go:364] duration metric: took 35.445µs to acquireMachinesLock for "functional-232602"
	I1218 00:35:27.263697 1311248 start.go:96] Skipping create...Using existing machine configuration
	I1218 00:35:27.263701 1311248 fix.go:54] fixHost starting: 
	I1218 00:35:27.263946 1311248 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
	I1218 00:35:27.280222 1311248 fix.go:112] recreateIfNeeded on functional-232602: state=Running err=<nil>
	W1218 00:35:27.280243 1311248 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 00:35:27.283327 1311248 out.go:252] * Updating the running docker "functional-232602" container ...
	I1218 00:35:27.283352 1311248 machine.go:94] provisionDockerMachine start ...
	I1218 00:35:27.283428 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.299920 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.300231 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.300238 1311248 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 00:35:27.452356 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.452370 1311248 ubuntu.go:182] provisioning hostname "functional-232602"
	I1218 00:35:27.452432 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.473471 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.473816 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.473825 1311248 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
	I1218 00:35:27.640067 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
	
	I1218 00:35:27.640142 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:27.667013 1311248 main.go:143] libmachine: Using SSH client type: native
	I1218 00:35:27.667323 1311248 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I1218 00:35:27.667342 1311248 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 00:35:27.820945 1311248 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 00:35:27.820961 1311248 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 00:35:27.820980 1311248 ubuntu.go:190] setting up certificates
	I1218 00:35:27.820989 1311248 provision.go:84] configureAuth start
	I1218 00:35:27.821051 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:27.838852 1311248 provision.go:143] copyHostCerts
	I1218 00:35:27.838916 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 00:35:27.838924 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 00:35:27.838994 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 00:35:27.839097 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 00:35:27.839100 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 00:35:27.839128 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 00:35:27.839186 1311248 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 00:35:27.839190 1311248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 00:35:27.839213 1311248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 00:35:27.839265 1311248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
	I1218 00:35:28.109890 1311248 provision.go:177] copyRemoteCerts
	I1218 00:35:28.109947 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 00:35:28.109996 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.127232 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.232344 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 00:35:28.250086 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 00:35:28.268448 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 00:35:28.286339 1311248 provision.go:87] duration metric: took 465.326862ms to configureAuth
	I1218 00:35:28.286357 1311248 ubuntu.go:206] setting minikube options for container-runtime
	I1218 00:35:28.286550 1311248 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:35:28.286556 1311248 machine.go:97] duration metric: took 1.003199883s to provisionDockerMachine
	I1218 00:35:28.286562 1311248 start.go:293] postStartSetup for "functional-232602" (driver="docker")
	I1218 00:35:28.286572 1311248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 00:35:28.286620 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 00:35:28.286663 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.304025 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.412869 1311248 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 00:35:28.416834 1311248 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 00:35:28.416854 1311248 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 00:35:28.416865 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 00:35:28.416921 1311248 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 00:35:28.417025 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 00:35:28.417099 1311248 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
	I1218 00:35:28.417168 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
	I1218 00:35:28.424798 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:28.442733 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
	I1218 00:35:28.462911 1311248 start.go:296] duration metric: took 176.334186ms for postStartSetup
	I1218 00:35:28.462983 1311248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:35:28.463039 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.480489 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.585769 1311248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 00:35:28.590837 1311248 fix.go:56] duration metric: took 1.327128154s for fixHost
	I1218 00:35:28.590854 1311248 start.go:83] releasing machines lock for "functional-232602", held for 1.327167711s
	I1218 00:35:28.590944 1311248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
	I1218 00:35:28.607738 1311248 ssh_runner.go:195] Run: cat /version.json
	I1218 00:35:28.607789 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.608049 1311248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 00:35:28.608095 1311248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
	I1218 00:35:28.626689 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.634380 1311248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
	I1218 00:35:28.732432 1311248 ssh_runner.go:195] Run: systemctl --version
	I1218 00:35:28.823477 1311248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 00:35:28.828399 1311248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 00:35:28.828467 1311248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 00:35:28.836277 1311248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 00:35:28.836291 1311248 start.go:496] detecting cgroup driver to use...
	I1218 00:35:28.836322 1311248 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 00:35:28.836377 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 00:35:28.852038 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 00:35:28.865568 1311248 docker.go:218] disabling cri-docker service (if available) ...
	I1218 00:35:28.865634 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 00:35:28.881324 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 00:35:28.894482 1311248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 00:35:29.019814 1311248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 00:35:29.139455 1311248 docker.go:234] disabling docker service ...
	I1218 00:35:29.139511 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 00:35:29.157302 1311248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 00:35:29.172520 1311248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 00:35:29.290798 1311248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 00:35:29.409846 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 00:35:29.423039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 00:35:29.438313 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 00:35:29.447458 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 00:35:29.457161 1311248 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 00:35:29.457221 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 00:35:29.466703 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.475761 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 00:35:29.484925 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 00:35:29.493811 1311248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 00:35:29.502125 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 00:35:29.511205 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 00:35:29.520548 1311248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 00:35:29.530343 1311248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 00:35:29.538157 1311248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 00:35:29.545765 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:29.664409 1311248 ssh_runner.go:195] Run: sudo systemctl restart containerd
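
Note: the sed pipeline above rewrites /etc/containerd/config.toml in place (pause image, cgroup driver, CNI conf dir, unprivileged ports) before this restart. One quick way to confirm the targeted keys took effect, a sketch using the same profile:

	out/minikube-linux-arm64 -p functional-232602 ssh "sudo grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml"
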
	I1218 00:35:29.789454 1311248 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 00:35:29.789537 1311248 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 00:35:29.793414 1311248 start.go:564] Will wait 60s for crictl version
	I1218 00:35:29.793467 1311248 ssh_runner.go:195] Run: which crictl
	I1218 00:35:29.796922 1311248 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 00:35:29.821478 1311248 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 00:35:29.821534 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.845973 1311248 ssh_runner.go:195] Run: containerd --version
	I1218 00:35:29.874969 1311248 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 00:35:29.877886 1311248 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 00:35:29.897397 1311248 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
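
Note: the flattened network template above can also be queried field by field; a sketch against the same network (subnet and gateway follow from the 192.168.49.2/24 endpoint reported earlier):

	docker network inspect functional-232602 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# Expected for this run: 192.168.49.0/24 via 192.168.49.1
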
	I1218 00:35:29.909164 1311248 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1218 00:35:29.912023 1311248 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 00:35:29.912156 1311248 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 00:35:29.912246 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.959601 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.959615 1311248 containerd.go:534] Images already preloaded, skipping extraction
	I1218 00:35:29.959670 1311248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 00:35:29.987018 1311248 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 00:35:29.987029 1311248 cache_images.go:86] Images are preloaded, skipping loading
	I1218 00:35:29.987035 1311248 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
	I1218 00:35:29.987151 1311248 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
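
Note: the unit fragment above is written to the node as a systemd drop-in (the 326-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below). Spot-checking the rendered file on the node:

	out/minikube-linux-arm64 -p functional-232602 ssh sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
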
	I1218 00:35:29.987219 1311248 ssh_runner.go:195] Run: sudo crictl info
	I1218 00:35:30.033188 1311248 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1218 00:35:30.033262 1311248 cni.go:84] Creating CNI manager for ""
	I1218 00:35:30.033272 1311248 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:35:30.033285 1311248 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 00:35:30.033322 1311248 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 00:35:30.033459 1311248 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-232602"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
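
Note: this generated manifest is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2085-byte scp below). It can be sanity-checked offline before kubeadm consumes it; a sketch that assumes kubeadm's config validate subcommand is available in v1.35.0-rc.1:

	out/minikube-linux-arm64 -p functional-232602 ssh sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
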
	
	I1218 00:35:30.033555 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 00:35:30.044133 1311248 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 00:35:30.044224 1311248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 00:35:30.053566 1311248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 00:35:30.069600 1311248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 00:35:30.086185 1311248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1218 00:35:30.100953 1311248 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 00:35:30.105204 1311248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 00:35:30.229133 1311248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 00:35:30.643842 1311248 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
	I1218 00:35:30.643853 1311248 certs.go:195] generating shared ca certs ...
	I1218 00:35:30.643868 1311248 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:35:30.644040 1311248 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 00:35:30.644079 1311248 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 00:35:30.644085 1311248 certs.go:257] generating profile certs ...
	I1218 00:35:30.644187 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
	I1218 00:35:30.644248 1311248 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
	I1218 00:35:30.644287 1311248 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
	I1218 00:35:30.644391 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 00:35:30.644420 1311248 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 00:35:30.644426 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 00:35:30.644455 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 00:35:30.644481 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 00:35:30.644512 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 00:35:30.644557 1311248 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 00:35:30.645271 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 00:35:30.667963 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 00:35:30.688789 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 00:35:30.707638 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 00:35:30.727172 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 00:35:30.745582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 00:35:30.763537 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 00:35:30.781521 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 00:35:30.799255 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 00:35:30.816582 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 00:35:30.835230 1311248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 00:35:30.852513 1311248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 00:35:30.865555 1311248 ssh_runner.go:195] Run: openssl version
	I1218 00:35:30.871911 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.879397 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 00:35:30.886681 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890109 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.890169 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 00:35:30.930894 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 00:35:30.938142 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.945286 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 00:35:30.952538 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956151 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.956245 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 00:35:30.997157 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 00:35:31.005056 1311248 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.014006 1311248 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 00:35:31.022034 1311248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025894 1311248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.025961 1311248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 00:35:31.067200 1311248 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
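
Annotation: the three test/ln/hash sequences above follow the standard OpenSSL CA layout: each PEM is copied into /usr/share/ca-certificates, and a symlink named after its subject hash (the output of `openssl x509 -hash -noout`, e.g. b5213941.0) is created in /etc/ssl/certs so OpenSSL can find it. A minimal Go sketch of that convention, assuming illustrative paths rather than minikube's actual helper:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// linkCert reproduces the `ln -fs` convention from the log: OpenSSL resolves
// a CA by looking up <subject-hash>.<n> in the certs directory, so the PEM is
// linked under the hash printed by `openssl x509 -hash -noout`.
func linkCert(pemPath string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        return err
    }
    link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    os.Remove(link) // mirror `ln -fs`: drop any stale link first
    return os.Symlink(pemPath, link)
}

func main() {
    if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}
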
	I1218 00:35:31.075278 1311248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 00:35:31.079306 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 00:35:31.123391 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 00:35:31.165879 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 00:35:31.208281 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 00:35:31.249146 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 00:35:31.290212 1311248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
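
Annotation: the `-checkend 86400` invocations above make openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether control-plane certs need regeneration. A minimal equivalent using Go's crypto/x509, assuming a local PEM path; a sketch, not minikube's implementation:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// expiresWithin reports whether the certificate at path expires inside d,
// which is what `openssl x509 -checkend <seconds>` tests (exit 1 if so).
func expiresWithin(path string, d time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM block in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
    soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    fmt.Println(soon, err)
}
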
	I1218 00:35:31.331444 1311248 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:35:31.331522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 00:35:31.331580 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.356945 1311248 cri.go:89] found id: ""
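
Annotation: `found id: ""` above means the label-filtered crictl query returned nothing, i.e. no kube-system containers exist yet on the node. A sketch of the same query from Go; the wrapper function is illustrative:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// kubeSystemContainers runs the same label-filtered query as cri.go above;
// an empty result is what the log prints as `found id: ""`.
func kubeSystemContainers() ([]string, error) {
    out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
        "--label", "io.kubernetes.pod.namespace=kube-system").Output()
    if err != nil {
        return nil, err
    }
    return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
    fmt.Println(kubeSystemContainers())
}
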
	I1218 00:35:31.357003 1311248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 00:35:31.364788 1311248 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 00:35:31.364798 1311248 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 00:35:31.364876 1311248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 00:35:31.372428 1311248 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.372951 1311248 kubeconfig.go:125] found "functional-232602" server: "https://192.168.49.2:8441"
	I1218 00:35:31.374199 1311248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 00:35:31.382218 1311248 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-18 00:20:57.479200490 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-18 00:35:30.095938034 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
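
Annotation: drift detection here is just `diff -u` over the deployed kubeadm.yaml and the freshly rendered one: exit 0 means identical, exit 1 means reconfigure (in this run, the admission-plugins value changed to the test's NamespaceAutoProvision extra-config). A local sketch of that check:

package main

import (
    "fmt"
    "os/exec"
)

// configDrifted compares the two files the way the log does: diff exits 0
// for identical files, 1 for differing files, 2 on trouble.
func configDrifted(oldPath, newPath string) (bool, string, error) {
    out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    if err == nil {
        return false, "", nil
    }
    if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
        return true, string(out), nil
    }
    return false, "", err
}

func main() {
    drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    if err == nil && drifted {
        fmt.Print("kubeadm config drift detected:\n" + diff)
    }
}
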
	I1218 00:35:31.382230 1311248 kubeadm.go:1161] stopping kube-system containers ...
	I1218 00:35:31.382240 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1218 00:35:31.382293 1311248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 00:35:31.418635 1311248 cri.go:89] found id: ""
	I1218 00:35:31.418695 1311248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1218 00:35:31.437319 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:35:31.447695 1311248 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 18 00:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 18 00:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 18 00:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 18 00:25 /etc/kubernetes/scheduler.conf
	
	I1218 00:35:31.447757 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:35:31.455511 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:35:31.463139 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.463194 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:35:31.470550 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.478132 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.478200 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:35:31.485959 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:35:31.493702 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 00:35:31.493757 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
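
Annotation: the grep / `rm -f` sequence above keeps a kubeconfig only if it already references https://control-plane.minikube.internal:8441 and deletes it otherwise, so the upcoming `kubeadm init phase kubeconfig all` regenerates it against the right endpoint. A sketch under that reading:

package main

import (
    "os"
    "strings"
)

// pruneKubeconfig keeps the file only when it already contains the expected
// server URL; otherwise it is removed so kubeadm can regenerate it.
func pruneKubeconfig(path, endpoint string) error {
    data, err := os.ReadFile(path)
    if err == nil && strings.Contains(string(data), endpoint) {
        return nil
    }
    return os.Remove(path)
}

func main() {
    for _, f := range []string{
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    } {
        _ = pruneKubeconfig(f, "https://control-plane.minikube.internal:8441")
    }
}
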
	I1218 00:35:31.501195 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:35:31.509596 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:31.563212 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:32.882945 1311248 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.319707666s)
	I1218 00:35:32.883005 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.109967 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 00:35:33.178681 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
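
Annotation: rather than a full `kubeadm init`, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch mirroring the commands shown above, with the phase order and PATH prefix copied from the log:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Phase order as executed in the log; each runs with the version-pinned
    // kubeadm binary first on PATH.
    phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    for _, p := range phases {
        cmd := fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
        if err := exec.Command("sudo", "/bin/bash", "-c", cmd).Run(); err != nil {
            fmt.Println("phase failed:", p, err)
            return
        }
    }
}
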
	I1218 00:35:33.229970 1311248 api_server.go:52] waiting for apiserver process to appear ...
	I1218 00:35:33.230040 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:33.730927 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.230378 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:34.730284 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.230343 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:35.730919 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:36.730993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.230539 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:37.731124 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.230838 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:38.730863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.230678 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:39.730230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.230236 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:40.731068 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:41.231109 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:41.730288 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:42.230203 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:42.730234 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:43.230141 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:43.730185 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:44.231143 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:44.730804 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:45.237230 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:45.730893 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:46.230803 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:46.730882 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:47.230533 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:47.731147 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:48.230905 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:48.730814 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:49.230754 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:49.730337 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:50.230375 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:50.731190 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:51.230987 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:51.731023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:52.230495 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:52.730322 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:53.230929 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:53.730922 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:54.231058 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:54.730458 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:55.230148 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:55.730779 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:56.230494 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:56.731136 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:57.231080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:57.730219 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:58.230880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:58.730261 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:59.230265 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:35:59.730444 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:00.230228 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:00.730965 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:01.231030 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:01.730793 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:02.231094 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:02.730432 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:03.230277 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:03.730969 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:04.230206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:04.731080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:05.230777 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:05.730718 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:06.231042 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:06.730199 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:07.230478 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:07.730807 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:08.230613 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:08.730187 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:09.231163 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:09.731095 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:10.231010 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:10.731081 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:11.230167 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:11.730331 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:12.230144 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:12.730362 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:13.230993 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:13.730893 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:14.230791 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:14.731035 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:15.230946 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:15.730274 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:16.230238 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:16.730202 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:17.231089 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:17.730821 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:18.230480 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:18.730348 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:19.230188 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:19.730212 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:20.230315 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:20.730113 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:21.231120 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:21.730951 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:22.230491 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:22.730452 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:23.230231 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:23.730205 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:24.230525 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:24.730779 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:25.230233 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:25.731067 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:26.231079 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:26.730956 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:27.230990 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:27.730196 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:28.230863 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:28.730884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:29.230380 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:29.730826 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:30.230239 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:30.731192 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:31.230615 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:31.730900 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:32.230553 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:32.730134 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
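
Annotation: judging by the timestamps, the pgrep probe above fires roughly every 500 ms for one minute (00:35:33 to 00:36:33) before the code falls back to the diagnostics dump that follows. A sketch of such a poll loop; the interval and budget are inferred from the log, not taken from minikube's source:

package main

import (
    "context"
    "fmt"
    "os/exec"
    "time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears or the
// context expires.
func waitForAPIServer(ctx context.Context) error {
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()
    for {
        if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
            return nil // pgrep exit 0: process found
        }
        select {
        case <-ctx.Done():
            return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
        case <-ticker.C:
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    defer cancel()
    fmt.Println(waitForAPIServer(ctx))
}
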
	I1218 00:36:33.230238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:33.230314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:33.258458 1311248 cri.go:89] found id: ""
	I1218 00:36:33.258472 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.258484 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:33.258490 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:33.258562 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:33.283965 1311248 cri.go:89] found id: ""
	I1218 00:36:33.283979 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.283986 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:33.283991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:33.284048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:33.308663 1311248 cri.go:89] found id: ""
	I1218 00:36:33.308678 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.308693 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:33.308699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:33.308760 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:33.337762 1311248 cri.go:89] found id: ""
	I1218 00:36:33.337775 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.337783 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:33.337788 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:33.337852 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:33.366489 1311248 cri.go:89] found id: ""
	I1218 00:36:33.366503 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.366510 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:33.366515 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:33.366574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:33.401983 1311248 cri.go:89] found id: ""
	I1218 00:36:33.401998 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.402005 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:33.402010 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:33.402067 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:33.436853 1311248 cri.go:89] found id: ""
	I1218 00:36:33.436867 1311248 logs.go:282] 0 containers: []
	W1218 00:36:33.436874 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:33.436883 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:33.436893 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:33.504087 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:33.495884   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.496404   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.497913   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.498238   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:33.499758   10708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:33.504097 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:33.504107 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:33.570523 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:33.570549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:33.607484 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:33.607500 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:33.664867 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:33.664884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
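
Annotation: each "Gathering logs" pass above collects the same four sources: the kubelet and containerd journals, filtered dmesg, and container status (plus the failing `kubectl describe nodes`). A plain-Go restatement of the visible commands; the map is illustrative, not minikube's actual data structure:

package main

import (
    "fmt"
    "os/exec"
)

// logSources restates the shell commands visible in the log.
var logSources = map[string]string{
    "kubelet":          `sudo journalctl -u kubelet -n 400`,
    "containerd":       `sudo journalctl -u containerd -n 400`,
    "dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
    for name, cmd := range logSources {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    }
}
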
	I1218 00:36:36.181388 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:36.191464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:36.191521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:36.214848 1311248 cri.go:89] found id: ""
	I1218 00:36:36.214863 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.214870 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:36.214876 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:36.214933 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:36.241311 1311248 cri.go:89] found id: ""
	I1218 00:36:36.241324 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.241331 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:36.241336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:36.241394 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:36.265257 1311248 cri.go:89] found id: ""
	I1218 00:36:36.265271 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.265279 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:36.265284 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:36.265343 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:36.288492 1311248 cri.go:89] found id: ""
	I1218 00:36:36.288506 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.288513 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:36.288518 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:36.288574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:36.316558 1311248 cri.go:89] found id: ""
	I1218 00:36:36.316573 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.316580 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:36.316585 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:36.316664 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:36.341952 1311248 cri.go:89] found id: ""
	I1218 00:36:36.341966 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.341973 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:36.341979 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:36.342037 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:36.365945 1311248 cri.go:89] found id: ""
	I1218 00:36:36.365959 1311248 logs.go:282] 0 containers: []
	W1218 00:36:36.365966 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:36.365974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:36.365983 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:36.426123 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:36.426142 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:36.444123 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:36.444140 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:36.509193 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:36.500571   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.501248   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.502991   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.503669   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:36.505155   10821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:36.509204 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:36.509214 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:36.571649 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:36.571667 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.103696 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:39.113703 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:39.113762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:39.141856 1311248 cri.go:89] found id: ""
	I1218 00:36:39.141870 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.141878 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:39.141883 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:39.141944 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:39.170038 1311248 cri.go:89] found id: ""
	I1218 00:36:39.170052 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.170101 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:39.170107 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:39.170172 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:39.199014 1311248 cri.go:89] found id: ""
	I1218 00:36:39.199028 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.199035 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:39.199041 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:39.199101 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:39.226392 1311248 cri.go:89] found id: ""
	I1218 00:36:39.226414 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.226422 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:39.226427 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:39.226493 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:39.251905 1311248 cri.go:89] found id: ""
	I1218 00:36:39.251920 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.251927 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:39.251932 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:39.251992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:39.276915 1311248 cri.go:89] found id: ""
	I1218 00:36:39.276937 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.276944 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:39.276949 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:39.277007 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:39.301520 1311248 cri.go:89] found id: ""
	I1218 00:36:39.301534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:39.301542 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:39.301551 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:39.301560 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:39.364240 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:39.364259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:39.394082 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:39.394098 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:39.460886 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:39.460907 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:39.477258 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:39.477273 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:39.547172 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:39.535504   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.536233   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541109   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.541738   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:39.543242   10938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:42.048213 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:42.059442 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:42.059521 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:42.095887 1311248 cri.go:89] found id: ""
	I1218 00:36:42.095903 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.095911 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:42.095917 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:42.095987 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:42.126738 1311248 cri.go:89] found id: ""
	I1218 00:36:42.126756 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.126763 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:42.126769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:42.126846 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:42.183895 1311248 cri.go:89] found id: ""
	I1218 00:36:42.183916 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.183924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:42.183931 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:42.184005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:42.217296 1311248 cri.go:89] found id: ""
	I1218 00:36:42.217313 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.217320 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:42.217333 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:42.217410 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:42.248021 1311248 cri.go:89] found id: ""
	I1218 00:36:42.248038 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.248065 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:42.248071 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:42.248143 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:42.278624 1311248 cri.go:89] found id: ""
	I1218 00:36:42.278650 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.278658 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:42.278664 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:42.278732 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:42.306575 1311248 cri.go:89] found id: ""
	I1218 00:36:42.306589 1311248 logs.go:282] 0 containers: []
	W1218 00:36:42.306604 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:42.306613 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:42.306622 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:42.366835 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:42.366859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:42.381793 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:42.381810 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:42.478588 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:42.470353   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.470899   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.472512   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.473123   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:42.474698   11023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:42.478598 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:42.478608 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:42.541093 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:42.541114 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:45.069751 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:45.106091 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:45.106161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:45.152078 1311248 cri.go:89] found id: ""
	I1218 00:36:45.152105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.152113 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:45.152120 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:45.152202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:45.228849 1311248 cri.go:89] found id: ""
	I1218 00:36:45.228866 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.228874 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:45.228881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:45.229017 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:45.284605 1311248 cri.go:89] found id: ""
	I1218 00:36:45.284640 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.284648 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:45.284654 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:45.284773 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:45.318439 1311248 cri.go:89] found id: ""
	I1218 00:36:45.318454 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.318461 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:45.318467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:45.318532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:45.348962 1311248 cri.go:89] found id: ""
	I1218 00:36:45.348976 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.348984 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:45.348990 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:45.349055 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:45.378098 1311248 cri.go:89] found id: ""
	I1218 00:36:45.378112 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.378119 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:45.378125 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:45.378227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:45.435291 1311248 cri.go:89] found id: ""
	I1218 00:36:45.435311 1311248 logs.go:282] 0 containers: []
	W1218 00:36:45.435318 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:45.435335 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:45.435362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:45.505552 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:45.505571 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:45.523778 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:45.523794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:45.592584 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:45.584713   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.585204   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.586780   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.587182   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.588708   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:45.584713   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.585204   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.586780   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.587182   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:45.588708   11131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:45.592594 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:45.592606 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:45.658999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:45.659018 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
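Each pass of the loop above queries the CRI runtime once per control-plane component; an empty ID list is what produces the No container was found matching warnings. A minimal Go sketch of that probe, assuming crictl is on the node's PATH and sudo is passwordless (function names are illustrative, not minikube's actual cri package):

    // crictl_probe.go: illustrative sketch of the per-component container probe.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors "sudo crictl ps -a --quiet --name=<component>":
    // it returns the IDs of all containers, running or exited, whose name
    // matches the given component.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("probe %s: %v\n", c, err)
                continue
            }
            // len(ids) == 0 corresponds to `No container was found matching "<c>"`.
            fmt.Printf("%s: %d container(s)\n", c, len(ids))
        }
    }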
	I1218 00:36:48.186749 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:48.197169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:48.197230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:48.222369 1311248 cri.go:89] found id: ""
	I1218 00:36:48.222383 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.222390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:48.222396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:48.222459 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:48.247132 1311248 cri.go:89] found id: ""
	I1218 00:36:48.247146 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.247153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:48.247158 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:48.247217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:48.272441 1311248 cri.go:89] found id: ""
	I1218 00:36:48.272455 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.272462 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:48.272467 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:48.272526 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:48.302640 1311248 cri.go:89] found id: ""
	I1218 00:36:48.302655 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.302662 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:48.302679 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:48.302737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:48.329411 1311248 cri.go:89] found id: ""
	I1218 00:36:48.329425 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.329433 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:48.329438 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:48.329497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:48.358419 1311248 cri.go:89] found id: ""
	I1218 00:36:48.358433 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.358440 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:48.358445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:48.358503 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:48.383182 1311248 cri.go:89] found id: ""
	I1218 00:36:48.383195 1311248 logs.go:282] 0 containers: []
	W1218 00:36:48.383203 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:48.383210 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:48.383220 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:48.451796 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:48.451815 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:48.467080 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:48.467096 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:48.533083 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:48.524386   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.525232   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.526917   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.527532   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.529214   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:48.524386   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.525232   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.526917   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.527532   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:48.529214   11237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:48.533092 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:48.533103 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:48.596920 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:48.596940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
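Every cycle is gated on sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the whole line to match the pattern, and -n selects the newest match. pgrep exits 0 only when at least one process matched, so the container probes and log gathering above run only while no apiserver process exists. A hedged Go wrapper around the same check (illustrative only):

    // pgrep_check.go: wraps the gate check seen at the top of each cycle.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverRunning reports whether a process whose full command line
    // matches the pattern exists; pgrep exits 0 iff something matched,
    // so a nil error means the apiserver process is up.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        fmt.Println("kube-apiserver process found:", apiserverRunning())
    }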
	I1218 00:36:51.124756 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:51.135594 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:51.135659 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:51.164133 1311248 cri.go:89] found id: ""
	I1218 00:36:51.164148 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.164156 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:51.164161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:51.164226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:51.190200 1311248 cri.go:89] found id: ""
	I1218 00:36:51.190215 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.190222 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:51.190228 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:51.190291 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:51.216170 1311248 cri.go:89] found id: ""
	I1218 00:36:51.216187 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.216194 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:51.216200 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:51.216263 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:51.246031 1311248 cri.go:89] found id: ""
	I1218 00:36:51.246045 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.246052 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:51.246058 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:51.246122 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:51.278864 1311248 cri.go:89] found id: ""
	I1218 00:36:51.278878 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.278885 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:51.278890 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:51.278963 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:51.303118 1311248 cri.go:89] found id: ""
	I1218 00:36:51.303132 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.303139 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:51.303144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:51.303202 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:51.328091 1311248 cri.go:89] found id: ""
	I1218 00:36:51.328105 1311248 logs.go:282] 0 containers: []
	W1218 00:36:51.328112 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:51.328120 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:51.328130 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:51.385226 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:51.385249 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:51.400951 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:51.400967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:51.479293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:51.470350   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.470962   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.472611   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.473295   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.474905   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:51.470350   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.470962   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.472611   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.473295   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:51.474905   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:51.479304 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:51.479315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:51.541268 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:51.541288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
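The recurring dial tcp [::1]:8441: connect: connection refused stderr means nothing is listening on the apiserver port this run was started with (--apiserver-port=8441); kubectl never gets far enough to authenticate. A quick TCP probe reproduces the condition (the port value is taken from this run's flags; everything else is illustrative):

    // port_probe.go: tiny TCP probe for the apiserver port.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // Matches the kubectl stderr above: nothing is bound to the port,
            // so every API request dies with "connection refused".
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }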
	I1218 00:36:54.069293 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:54.080067 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:54.080153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:54.106375 1311248 cri.go:89] found id: ""
	I1218 00:36:54.106390 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.106402 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:54.106408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:54.106467 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:54.131767 1311248 cri.go:89] found id: ""
	I1218 00:36:54.131781 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.131788 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:54.131793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:54.131850 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:54.157519 1311248 cri.go:89] found id: ""
	I1218 00:36:54.157534 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.157541 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:54.157546 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:54.157606 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:54.182381 1311248 cri.go:89] found id: ""
	I1218 00:36:54.182396 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.182403 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:54.182408 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:54.182478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:54.211219 1311248 cri.go:89] found id: ""
	I1218 00:36:54.211234 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.211241 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:54.211247 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:54.211323 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:54.243605 1311248 cri.go:89] found id: ""
	I1218 00:36:54.243627 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.243634 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:54.243640 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:54.243710 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:54.268614 1311248 cri.go:89] found id: ""
	I1218 00:36:54.268648 1311248 logs.go:282] 0 containers: []
	W1218 00:36:54.268655 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:54.268664 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:54.268675 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:54.332655 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:54.324645   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.325064   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326586   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326911   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.328406   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:54.324645   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.325064   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326586   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.326911   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:54.328406   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:54.332668 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:54.332679 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:54.396896 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:54.396916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:36:54.440350 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:54.440371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:54.503158 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:54.503178 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
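When the process check fails, the Gathering logs for ... lines fan out over a fixed set of sources: kubelet and containerd via journalctl, kernel warnings via dmesg, and container status via crictl with a docker fallback. A compact sketch of that fan-out, using the exact shell commands from the log above (the Go scaffolding around them is illustrative):

    // gather_logs.go: sketch of the log-gathering fan-out; the shell
    // commands are copied from the log, the wrapper is illustrative.
    package main

    import (
        "fmt"
        "os/exec"
    )

    var sources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "containerd":       "sudo journalctl -u containerd -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range sources {
            // CombinedOutput keeps stderr, which is what ends up in this report.
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("== %s: %d bytes, err=%v ==\n", name, len(out), err)
        }
    }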
	I1218 00:36:57.019672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:57.030198 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:57.030268 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:57.059845 1311248 cri.go:89] found id: ""
	I1218 00:36:57.059859 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.059866 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:57.059872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:57.059939 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:36:57.086203 1311248 cri.go:89] found id: ""
	I1218 00:36:57.086217 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.086224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:36:57.086229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:36:57.086326 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:36:57.115321 1311248 cri.go:89] found id: ""
	I1218 00:36:57.115335 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.115342 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:36:57.115347 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:36:57.115416 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:36:57.141717 1311248 cri.go:89] found id: ""
	I1218 00:36:57.141731 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.141738 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:36:57.141743 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:36:57.141801 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:36:57.166376 1311248 cri.go:89] found id: ""
	I1218 00:36:57.166389 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.166396 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:36:57.166400 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:36:57.166470 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:36:57.194461 1311248 cri.go:89] found id: ""
	I1218 00:36:57.194475 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.194494 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:36:57.194500 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:36:57.194557 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:36:57.219267 1311248 cri.go:89] found id: ""
	I1218 00:36:57.219280 1311248 logs.go:282] 0 containers: []
	W1218 00:36:57.219287 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:36:57.219295 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:36:57.219305 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:36:57.274913 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:36:57.274932 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:36:57.290015 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:36:57.290032 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:36:57.353493 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:36:57.344799   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.345456   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347207   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347788   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.349492   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:36:57.344799   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.345456   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347207   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.347788   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:36:57.349492   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:36:57.353504 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:36:57.353514 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:36:57.424372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:36:57.424400 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
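The timestamps (00:36:45, :48, :51, :54, ...) show the whole sequence repeating on a roughly three-second cadence until some overall deadline expires. A skeletal wait loop with that shape, where both the interval and the six-minute budget are assumptions read off this log rather than minikube's real constants:

    // wait_loop.go: skeletal retry loop matching the cadence of the log;
    // the 3s interval is read off the timestamps, the 6m budget is a guess.
    package main

    import (
        "fmt"
        "time"
    )

    // apiserverUp is a stub standing in for the pgrep check sketched earlier.
    func apiserverUp() bool { return false }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverUp() {
                fmt.Println("apiserver is back")
                return
            }
            time.Sleep(3 * time.Second) // spacing between the pgrep runs above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }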
	I1218 00:36:59.955778 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:36:59.965801 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:36:59.965861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:36:59.993708 1311248 cri.go:89] found id: ""
	I1218 00:36:59.993722 1311248 logs.go:282] 0 containers: []
	W1218 00:36:59.993729 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:36:59.993734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:36:59.993792 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:00.055250 1311248 cri.go:89] found id: ""
	I1218 00:37:00.055266 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.055274 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:00.055280 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:00.055388 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:00.117792 1311248 cri.go:89] found id: ""
	I1218 00:37:00.117810 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.117818 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:00.117824 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:00.117903 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:00.170362 1311248 cri.go:89] found id: ""
	I1218 00:37:00.170378 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.170394 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:00.170401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:00.170482 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:00.229984 1311248 cri.go:89] found id: ""
	I1218 00:37:00.230002 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.230010 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:00.230015 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:00.230094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:00.264809 1311248 cri.go:89] found id: ""
	I1218 00:37:00.264826 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.264833 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:00.264839 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:00.264908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:00.313700 1311248 cri.go:89] found id: ""
	I1218 00:37:00.313718 1311248 logs.go:282] 0 containers: []
	W1218 00:37:00.313725 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:00.313734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:00.313747 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:00.390802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:00.390825 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:00.428189 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:00.428207 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:00.494729 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:00.494750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:00.511226 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:00.511245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:00.579855 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:00.571615   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.572526   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574023   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574461   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.575927   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:00.571615   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.572526   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574023   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.574461   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:00.575927   11674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
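The describe nodes step itself is just the node-local kubectl run over SSH against the node's kubeconfig; with no apiserver behind localhost:8441 it exits 1, and minikube records the resulting stderr twice, once inside the error message and once as captured output, which is why each failed block above repeats the same five lines. A sketch of the same invocation (binary path and kubeconfig location are copied from the log; the wrapper is illustrative):

    // describe_nodes.go: the invocation behind the "describe nodes" step.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes" +
            " --kubeconfig=/var/lib/minikube/kubeconfig"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            // Exits 1 while the apiserver is down, producing the
            // "connection refused" stderr captured above.
            fmt.Printf("describe nodes failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }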
	I1218 00:37:03.080114 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:03.090701 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:03.090768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:03.123581 1311248 cri.go:89] found id: ""
	I1218 00:37:03.123596 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.123603 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:03.123608 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:03.123666 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:03.148602 1311248 cri.go:89] found id: ""
	I1218 00:37:03.148615 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.148657 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:03.148662 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:03.148733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:03.174826 1311248 cri.go:89] found id: ""
	I1218 00:37:03.174840 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.174848 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:03.174853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:03.174927 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:03.200912 1311248 cri.go:89] found id: ""
	I1218 00:37:03.200926 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.200933 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:03.200939 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:03.200998 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:03.226151 1311248 cri.go:89] found id: ""
	I1218 00:37:03.226166 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.226173 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:03.226179 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:03.226237 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:03.253785 1311248 cri.go:89] found id: ""
	I1218 00:37:03.253799 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.253806 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:03.253812 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:03.253878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:03.279482 1311248 cri.go:89] found id: ""
	I1218 00:37:03.279495 1311248 logs.go:282] 0 containers: []
	W1218 00:37:03.279502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:03.279510 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:03.279521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:03.294545 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:03.294563 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:03.360050 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:03.351618   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.352118   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.353740   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.354377   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.356009   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:03.351618   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.352118   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.353740   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.354377   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:03.356009   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:03.360059 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:03.360071 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:03.423132 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:03.423151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:03.461805 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:03.461820 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.018802 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:06.030336 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:06.030406 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:06.056426 1311248 cri.go:89] found id: ""
	I1218 00:37:06.056440 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.056447 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:06.056453 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:06.056513 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:06.086319 1311248 cri.go:89] found id: ""
	I1218 00:37:06.086333 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.086341 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:06.086346 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:06.086413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:06.112062 1311248 cri.go:89] found id: ""
	I1218 00:37:06.112077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.112084 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:06.112089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:06.112157 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:06.137317 1311248 cri.go:89] found id: ""
	I1218 00:37:06.137331 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.137344 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:06.137351 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:06.137419 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:06.165090 1311248 cri.go:89] found id: ""
	I1218 00:37:06.165104 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.165111 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:06.165116 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:06.165174 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:06.190738 1311248 cri.go:89] found id: ""
	I1218 00:37:06.190753 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.190759 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:06.190765 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:06.190822 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:06.215038 1311248 cri.go:89] found id: ""
	I1218 00:37:06.215066 1311248 logs.go:282] 0 containers: []
	W1218 00:37:06.215075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:06.215083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:06.215094 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:06.270893 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:06.270915 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:06.285817 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:06.285834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:06.354768 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:06.346564   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.347477   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349158   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349463   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.350911   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:06.346564   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.347477   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349158   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.349463   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:06.350911   11866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:06.354777 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:06.354787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:06.416937 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:06.416957 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:08.951149 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:08.961238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:08.961297 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:08.985900 1311248 cri.go:89] found id: ""
	I1218 00:37:08.985916 1311248 logs.go:282] 0 containers: []
	W1218 00:37:08.985923 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:08.985928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:08.985993 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:09.016022 1311248 cri.go:89] found id: ""
	I1218 00:37:09.016036 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.016043 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:09.016048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:09.016106 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:09.040820 1311248 cri.go:89] found id: ""
	I1218 00:37:09.040841 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.040849 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:09.040853 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:09.040912 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:09.065452 1311248 cri.go:89] found id: ""
	I1218 00:37:09.065466 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.065473 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:09.065478 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:09.065539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:09.095062 1311248 cri.go:89] found id: ""
	I1218 00:37:09.095077 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.095083 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:09.095089 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:09.095151 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:09.120274 1311248 cri.go:89] found id: ""
	I1218 00:37:09.120287 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.120294 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:09.120300 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:09.120366 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:09.144652 1311248 cri.go:89] found id: ""
	I1218 00:37:09.144667 1311248 logs.go:282] 0 containers: []
	W1218 00:37:09.144674 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:09.144683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:09.144700 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:09.159355 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:09.159371 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:09.224560 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:09.215599   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.216102   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.217785   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.218180   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:09.219658   11969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:09.224571 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:09.224582 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:09.286931 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:09.286951 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:09.318873 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:09.318888 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
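The cycle above repeats for the remainder of this failure: minikube enumerates each control-plane component with "sudo crictl ps -a --quiet --name=<component>", finds no containers, and falls back to gathering kubelet, dmesg, containerd, and container-status logs. Below is a minimal Go sketch of that discovery step, assuming only the crictl invocation shown in the log; it is an illustration of the pattern, not minikube's actual logs.go code.

// A minimal sketch, assuming only the crictl command recorded in the log;
// illustrative only, not minikube's logs.go implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the logged command:
//   sudo crictl ps -a --quiet --name=<component>
// crictl prints matching container IDs one per line; an empty result is the
// `found id: ""` / `0 containers: []` pair seen on every pass above.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same components the log enumerates on each cycle.
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet",
	} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		}
	}
}

An empty ID list from crictl for every component is what produces the uniform warnings in each pass of the log.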
	I1218 00:37:11.876699 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:11.887524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:11.887583 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:11.913617 1311248 cri.go:89] found id: ""
	I1218 00:37:11.913631 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.913638 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:11.913643 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:11.913701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:11.942203 1311248 cri.go:89] found id: ""
	I1218 00:37:11.942219 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.942226 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:11.942231 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:11.942292 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:11.967671 1311248 cri.go:89] found id: ""
	I1218 00:37:11.967685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.967692 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:11.967697 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:11.967766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:11.992422 1311248 cri.go:89] found id: ""
	I1218 00:37:11.992437 1311248 logs.go:282] 0 containers: []
	W1218 00:37:11.992443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:11.992448 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:11.992505 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:12.031034 1311248 cri.go:89] found id: ""
	I1218 00:37:12.031049 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.031056 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:12.031061 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:12.031119 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:12.057654 1311248 cri.go:89] found id: ""
	I1218 00:37:12.057669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.057677 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:12.057682 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:12.057764 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:12.082063 1311248 cri.go:89] found id: ""
	I1218 00:37:12.082078 1311248 logs.go:282] 0 containers: []
	W1218 00:37:12.082084 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:12.082092 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:12.082102 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:12.111103 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:12.111119 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:12.168426 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:12.168446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:12.183407 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:12.183423 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:12.251784 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:12.243223   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.243928   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.245555   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.246042   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:12.247684   12087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:12.251803 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:12.251814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:14.823080 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:14.834459 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:14.834525 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:14.860258 1311248 cri.go:89] found id: ""
	I1218 00:37:14.860272 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.860278 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:14.860283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:14.860341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:14.884703 1311248 cri.go:89] found id: ""
	I1218 00:37:14.884722 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.884729 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:14.884734 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:14.884794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:14.909031 1311248 cri.go:89] found id: ""
	I1218 00:37:14.909046 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.909054 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:14.909059 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:14.909130 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:14.934504 1311248 cri.go:89] found id: ""
	I1218 00:37:14.934518 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.934525 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:14.934531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:14.934590 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:14.965623 1311248 cri.go:89] found id: ""
	I1218 00:37:14.965638 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.965646 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:14.965651 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:14.965718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:14.991607 1311248 cri.go:89] found id: ""
	I1218 00:37:14.991623 1311248 logs.go:282] 0 containers: []
	W1218 00:37:14.991631 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:14.991636 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:14.991711 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:15.027331 1311248 cri.go:89] found id: ""
	I1218 00:37:15.027347 1311248 logs.go:282] 0 containers: []
	W1218 00:37:15.027355 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:15.027364 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:15.027376 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:15.102509 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:15.094618   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.095457   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.096397   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.097224   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:15.098386   12173 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:15.102519 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:15.102530 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:15.167080 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:15.167101 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:15.200488 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:15.200504 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:15.261320 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:15.261342 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
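Every describe-nodes attempt fails the same way: dial tcp [::1]:8441: connect: connection refused. Connection refused means nothing is accepting TCP connections on the apiserver port at all, as opposed to a timeout or a TLS error from a half-started apiserver. A quick reachability probe in Go that reproduces the distinction; the port is taken from the log, the rest is illustrative:

// Reachability probe for the failure mode above; port 8441 comes from the
// log, everything else is an illustration.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// ECONNREFUSED here reproduces the kubectl stderr quoted above.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441; the failure is higher in the stack")
}

Against this cluster the probe would print the refused error, consistent with the kube-apiserver container never being found in any pass.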
	I1218 00:37:17.777092 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:17.788005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:17.788070 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:17.813820 1311248 cri.go:89] found id: ""
	I1218 00:37:17.813834 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.813841 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:17.813846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:17.813906 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:17.841574 1311248 cri.go:89] found id: ""
	I1218 00:37:17.841588 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.841605 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:17.841610 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:17.841679 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:17.865628 1311248 cri.go:89] found id: ""
	I1218 00:37:17.865644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.865650 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:17.865656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:17.865713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:17.891259 1311248 cri.go:89] found id: ""
	I1218 00:37:17.891273 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.891289 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:17.891295 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:17.891363 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:17.918377 1311248 cri.go:89] found id: ""
	I1218 00:37:17.918391 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.918398 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:17.918403 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:17.918461 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:17.948139 1311248 cri.go:89] found id: ""
	I1218 00:37:17.948171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.948178 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:17.948183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:17.948251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:17.971855 1311248 cri.go:89] found id: ""
	I1218 00:37:17.971869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:17.971876 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:17.971884 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:17.971894 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:18.026594 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:18.026614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:18.042303 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:18.042328 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:18.108683 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:18.100223   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.100970   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.102555   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.103125   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:18.104779   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:18.108704 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:18.108729 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:18.172657 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:18.172676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:20.704818 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:20.715060 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:20.715120 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:20.741147 1311248 cri.go:89] found id: ""
	I1218 00:37:20.741161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.741168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:20.741174 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:20.741231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:20.765846 1311248 cri.go:89] found id: ""
	I1218 00:37:20.765860 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.765867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:20.765872 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:20.765930 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:20.795338 1311248 cri.go:89] found id: ""
	I1218 00:37:20.795351 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.795358 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:20.795364 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:20.795421 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:20.823054 1311248 cri.go:89] found id: ""
	I1218 00:37:20.823068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.823075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:20.823080 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:20.823137 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:20.848186 1311248 cri.go:89] found id: ""
	I1218 00:37:20.848200 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.848208 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:20.848213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:20.848278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:20.872642 1311248 cri.go:89] found id: ""
	I1218 00:37:20.872656 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.872662 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:20.872668 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:20.872771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:20.897151 1311248 cri.go:89] found id: ""
	I1218 00:37:20.897165 1311248 logs.go:282] 0 containers: []
	W1218 00:37:20.897172 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:20.897180 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:20.897190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:20.951948 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:20.951968 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:20.966927 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:20.966943 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:21.033275 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:21.024825   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.025249   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.026815   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.028251   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:21.029400   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:21.033286 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:21.033296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:21.096425 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:21.096445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
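The pgrep timestamps (00:37:11, 00:37:14, 00:37:17, ...) show the harness re-checking for a kube-apiserver process roughly every three seconds, i.e. a poll-until-deadline loop. A stripped-down sketch of that pattern follows; the five-minute timeout is an assumed value for illustration, not the timeout minikube uses:

// Poll-until-deadline sketch; the 3s interval matches the cadence visible in
// the timestamps, the overall timeout is an assumption for illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when nothing matches, so err != nil means "not up yet".
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}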
	I1218 00:37:23.624716 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:23.635084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:23.635160 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:23.668648 1311248 cri.go:89] found id: ""
	I1218 00:37:23.668662 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.668670 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:23.668675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:23.668755 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:23.700454 1311248 cri.go:89] found id: ""
	I1218 00:37:23.700468 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.700475 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:23.700480 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:23.700538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:23.732021 1311248 cri.go:89] found id: ""
	I1218 00:37:23.732035 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.732043 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:23.732048 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:23.732124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:23.760854 1311248 cri.go:89] found id: ""
	I1218 00:37:23.760868 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.760875 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:23.760881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:23.760942 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:23.786164 1311248 cri.go:89] found id: ""
	I1218 00:37:23.786178 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.786185 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:23.786189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:23.786248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:23.811196 1311248 cri.go:89] found id: ""
	I1218 00:37:23.811220 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.811229 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:23.811234 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:23.811300 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:23.835282 1311248 cri.go:89] found id: ""
	I1218 00:37:23.835297 1311248 logs.go:282] 0 containers: []
	W1218 00:37:23.835314 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:23.835323 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:23.835334 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:23.899950 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:23.891162   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.891981   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893473   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.893917   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:23.895404   12490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:23.899970 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:23.899981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:23.966454 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:23.966474 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:23.994564 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:23.994580 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:24.052734 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:24.052755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:26.568298 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:26.578561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:26.578622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:26.602733 1311248 cri.go:89] found id: ""
	I1218 00:37:26.602747 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.602755 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:26.602761 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:26.602826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:26.631092 1311248 cri.go:89] found id: ""
	I1218 00:37:26.631106 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.631113 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:26.631118 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:26.631180 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:26.677513 1311248 cri.go:89] found id: ""
	I1218 00:37:26.677528 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.677536 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:26.677541 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:26.677608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:26.712071 1311248 cri.go:89] found id: ""
	I1218 00:37:26.712085 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.712093 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:26.712100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:26.712167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:26.738769 1311248 cri.go:89] found id: ""
	I1218 00:37:26.738783 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.738790 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:26.738795 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:26.738857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:26.764344 1311248 cri.go:89] found id: ""
	I1218 00:37:26.764358 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.764365 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:26.764370 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:26.764428 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:26.790276 1311248 cri.go:89] found id: ""
	I1218 00:37:26.790290 1311248 logs.go:282] 0 containers: []
	W1218 00:37:26.790297 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:26.790305 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:26.790315 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:26.845607 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:26.845626 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:26.861063 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:26.861080 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:26.931574 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:26.923282   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.923912   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925418   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.925858   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:26.927306   12600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:26.931584 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:26.931595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:26.998426 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:26.998445 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
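Each pass also shells out for unit logs with the exact commands recorded above. A compact sketch of that gathering step; the three commands are copied from the log, while the labels and control flow are illustrative:

// Log-gathering sketch; the commands are taken verbatim from the log above,
// the map labels and structure are illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		// The pipe in the dmesg command needs a shell, mirroring the
		// /bin/bash -c "..." invocations in the log.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s (%d bytes) ===\n", name, len(out))
	}
}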
	I1218 00:37:29.540997 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:29.551044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:29.551103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:29.575146 1311248 cri.go:89] found id: ""
	I1218 00:37:29.575161 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.575168 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:29.575173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:29.575230 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:29.599039 1311248 cri.go:89] found id: ""
	I1218 00:37:29.599052 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.599059 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:29.599064 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:29.599123 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:29.623971 1311248 cri.go:89] found id: ""
	I1218 00:37:29.623985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.623993 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:29.623998 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:29.624057 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:29.653653 1311248 cri.go:89] found id: ""
	I1218 00:37:29.653669 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.653675 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:29.653681 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:29.653754 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:29.687572 1311248 cri.go:89] found id: ""
	I1218 00:37:29.687586 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.687593 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:29.687599 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:29.687670 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:29.725789 1311248 cri.go:89] found id: ""
	I1218 00:37:29.725803 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.725811 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:29.725816 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:29.725878 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:29.753212 1311248 cri.go:89] found id: ""
	I1218 00:37:29.753226 1311248 logs.go:282] 0 containers: []
	W1218 00:37:29.753233 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:29.753241 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:29.753253 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:29.810976 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:29.810996 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:29.825952 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:29.825969 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:29.893717 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:29.885172   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.885837   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.887444   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.888114   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:29.889881   12703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:29.893736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:29.893748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:29.959773 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:29.959794 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:32.492460 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:32.502745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:32.502807 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:32.528416 1311248 cri.go:89] found id: ""
	I1218 00:37:32.528431 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.528438 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:32.528443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:32.528501 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:32.553770 1311248 cri.go:89] found id: ""
	I1218 00:37:32.553785 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.553792 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:32.553798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:32.553861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:32.577941 1311248 cri.go:89] found id: ""
	I1218 00:37:32.577956 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.577963 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:32.577969 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:32.578028 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:32.604043 1311248 cri.go:89] found id: ""
	I1218 00:37:32.604058 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.604075 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:32.604081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:32.604159 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:32.629080 1311248 cri.go:89] found id: ""
	I1218 00:37:32.629095 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.629102 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:32.629108 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:32.629167 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:32.664156 1311248 cri.go:89] found id: ""
	I1218 00:37:32.664171 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.664187 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:32.664193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:32.664281 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:32.692107 1311248 cri.go:89] found id: ""
	I1218 00:37:32.692141 1311248 logs.go:282] 0 containers: []
	W1218 00:37:32.692149 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:32.692158 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:32.692168 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:32.758211 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:32.758238 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:32.774028 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:32.774047 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:32.839724 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:32.829408   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.831067   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.832031   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.833732   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.834447   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:32.829408   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.831067   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.832031   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.833732   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:32.834447   12810 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:32.839734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:32.839749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:32.905609 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:32.905633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
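The cycle above repeats on a short backoff: minikube probes for a kube-apiserver process, then queries the CRI for each expected control-plane container by name, and every query returns an empty ID list before the code falls through to log gathering. A minimal sketch of the same per-component check, run inside the node (this assumes crictl is on the node's PATH, e.g. after `minikube ssh`):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      # crictl prints one container ID per line; empty output means the
      # component was never created (or was torn down) under containerd
      ids="$(sudo crictl ps -a --quiet --name="$c")"
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done

An empty result for all seven names, as seen here, is consistent with kubelet never having started the static pods rather than any single component crashing.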
	I1218 00:37:35.434204 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:35.445035 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:35.445099 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:35.470531 1311248 cri.go:89] found id: ""
	I1218 00:37:35.470545 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.470553 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:35.470558 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:35.470621 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:35.494976 1311248 cri.go:89] found id: ""
	I1218 00:37:35.494990 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.494996 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:35.495001 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:35.495063 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:35.519629 1311248 cri.go:89] found id: ""
	I1218 00:37:35.519644 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.519651 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:35.519656 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:35.519714 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:35.544438 1311248 cri.go:89] found id: ""
	I1218 00:37:35.544453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.544460 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:35.544465 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:35.544523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:35.569684 1311248 cri.go:89] found id: ""
	I1218 00:37:35.569699 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.569706 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:35.569712 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:35.569771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:35.595541 1311248 cri.go:89] found id: ""
	I1218 00:37:35.595556 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.595563 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:35.595568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:35.595632 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:35.620307 1311248 cri.go:89] found id: ""
	I1218 00:37:35.620321 1311248 logs.go:282] 0 containers: []
	W1218 00:37:35.620328 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:35.620336 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:35.620346 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:35.678927 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:35.678945 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:35.697469 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:35.697488 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:35.774692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:35.766317   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.766971   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768479   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768968   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.770457   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:35.766317   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.766971   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768479   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.768968   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:35.770457   12919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:35.774703 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:35.774713 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:35.836772 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:35.836792 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:38.369786 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:38.380243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:38.380304 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:38.406412 1311248 cri.go:89] found id: ""
	I1218 00:37:38.406426 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.406433 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:38.406439 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:38.406497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:38.431433 1311248 cri.go:89] found id: ""
	I1218 00:37:38.431447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.431454 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:38.431460 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:38.431518 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:38.455854 1311248 cri.go:89] found id: ""
	I1218 00:37:38.455869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.455876 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:38.455881 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:38.455943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:38.480414 1311248 cri.go:89] found id: ""
	I1218 00:37:38.480428 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.480435 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:38.480440 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:38.480497 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:38.506521 1311248 cri.go:89] found id: ""
	I1218 00:37:38.506535 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.506551 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:38.506557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:38.506630 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:38.531738 1311248 cri.go:89] found id: ""
	I1218 00:37:38.531762 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.531769 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:38.531774 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:38.531840 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:38.557054 1311248 cri.go:89] found id: ""
	I1218 00:37:38.557068 1311248 logs.go:282] 0 containers: []
	W1218 00:37:38.557075 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:38.557083 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:38.557092 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:38.613102 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:38.613120 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:38.627653 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:38.627670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:38.723568 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:38.715053   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.715674   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717355   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717806   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.719420   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:38.715053   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.715674   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717355   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.717806   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:38.719420   13020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:38.723579 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:38.723591 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:38.784988 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:38.785008 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
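Each `kubectl describe nodes` attempt above fails before API discovery even begins: nothing is listening on the non-default apiserver port 8441, so the client's TCP connect is refused outright. One quick way to confirm the listener state from inside the node (a sketch; it assumes curl is available in the minikube image):

    # /healthz answers once kube-apiserver is serving; here the TCP connect
    # itself fails, matching the "connection refused" lines in the log
    sudo curl -ksS --max-time 5 https://localhost:8441/healthz \
      || echo "apiserver not reachable on :8441"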
	I1218 00:37:41.315880 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:41.326378 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:41.326457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:41.351366 1311248 cri.go:89] found id: ""
	I1218 00:37:41.351381 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.351390 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:41.351395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:41.351454 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:41.376110 1311248 cri.go:89] found id: ""
	I1218 00:37:41.376124 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.376131 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:41.376137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:41.376192 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:41.401062 1311248 cri.go:89] found id: ""
	I1218 00:37:41.401075 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.401082 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:41.401087 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:41.401146 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:41.425454 1311248 cri.go:89] found id: ""
	I1218 00:37:41.425469 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.425475 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:41.425481 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:41.425539 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:41.454711 1311248 cri.go:89] found id: ""
	I1218 00:37:41.454724 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.454732 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:41.454737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:41.454799 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:41.479667 1311248 cri.go:89] found id: ""
	I1218 00:37:41.479681 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.479688 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:41.479694 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:41.479752 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:41.504248 1311248 cri.go:89] found id: ""
	I1218 00:37:41.504261 1311248 logs.go:282] 0 containers: []
	W1218 00:37:41.504268 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:41.504276 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:41.504323 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:41.559589 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:41.559609 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:41.574018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:41.574034 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:41.637175 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:41.628415   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.628972   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.630734   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.631095   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.632663   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:41.628415   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.628972   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.630734   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.631095   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:41.632663   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:41.637186 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:41.637196 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:41.712099 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:41.712122 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.243063 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:44.253213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:44.253272 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:44.278124 1311248 cri.go:89] found id: ""
	I1218 00:37:44.278138 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.278145 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:44.278150 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:44.278211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:44.302729 1311248 cri.go:89] found id: ""
	I1218 00:37:44.302743 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.302750 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:44.302755 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:44.302813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:44.327369 1311248 cri.go:89] found id: ""
	I1218 00:37:44.327384 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.327391 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:44.327396 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:44.327458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:44.351769 1311248 cri.go:89] found id: ""
	I1218 00:37:44.351784 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.351791 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:44.351796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:44.351858 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:44.378488 1311248 cri.go:89] found id: ""
	I1218 00:37:44.378502 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.378509 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:44.378514 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:44.378574 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:44.404134 1311248 cri.go:89] found id: ""
	I1218 00:37:44.404149 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.404156 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:44.404161 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:44.404219 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:44.428529 1311248 cri.go:89] found id: ""
	I1218 00:37:44.428543 1311248 logs.go:282] 0 containers: []
	W1218 00:37:44.428551 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:44.428559 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:44.428570 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:44.443196 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:44.443212 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:44.505692 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:44.497164   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.497880   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.499622   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.500221   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.501801   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:44.497164   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.497880   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.499622   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.500221   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:44.501801   13227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:44.505702 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:44.505712 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:44.571665 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:44.571686 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:44.600535 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:44.600553 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
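With no containers to inspect, the kubelet and containerd journals collected in each pass are the most useful evidence for why the static pods never appeared. The same logs can be pulled by hand; a sketch, with <profile> standing in for the profile under test:

    # both units mirror the journalctl calls the loop runs over ssh_runner
    minikube -p <profile> ssh "sudo journalctl -u kubelet -n 400 --no-pager"
    minikube -p <profile> ssh "sudo journalctl -u containerd -n 400 --no-pager"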
	I1218 00:37:47.157844 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:47.168414 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:47.168474 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:47.197971 1311248 cri.go:89] found id: ""
	I1218 00:37:47.197985 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.197992 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:47.197997 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:47.198054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:47.223237 1311248 cri.go:89] found id: ""
	I1218 00:37:47.223251 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.223258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:47.223263 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:47.223322 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:47.251998 1311248 cri.go:89] found id: ""
	I1218 00:37:47.252018 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.252025 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:47.252031 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:47.252089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:47.275741 1311248 cri.go:89] found id: ""
	I1218 00:37:47.275755 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.275764 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:47.275769 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:47.275826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:47.302583 1311248 cri.go:89] found id: ""
	I1218 00:37:47.302597 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.302604 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:47.302609 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:47.302665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:47.327501 1311248 cri.go:89] found id: ""
	I1218 00:37:47.327516 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.327523 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:47.327528 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:47.327594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:47.352433 1311248 cri.go:89] found id: ""
	I1218 00:37:47.352447 1311248 logs.go:282] 0 containers: []
	W1218 00:37:47.352454 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:47.352463 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:47.352473 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:47.410340 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:47.410362 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:47.425365 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:47.425388 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:47.492532 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:47.484205   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.484730   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486231   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486712   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.488178   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:47.484205   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.484730   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486231   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.486712   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:47.488178   13334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:47.492542 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:47.492562 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:47.553805 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:47.553828 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:50.086246 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:50.097136 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:50.097206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:50.124671 1311248 cri.go:89] found id: ""
	I1218 00:37:50.124685 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.124693 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:50.124698 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:50.124766 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:50.150439 1311248 cri.go:89] found id: ""
	I1218 00:37:50.150453 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.150460 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:50.150464 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:50.150523 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:50.174899 1311248 cri.go:89] found id: ""
	I1218 00:37:50.174913 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.174921 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:50.174926 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:50.174992 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:50.200398 1311248 cri.go:89] found id: ""
	I1218 00:37:50.200412 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.200420 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:50.200425 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:50.200486 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:50.226325 1311248 cri.go:89] found id: ""
	I1218 00:37:50.226338 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.226345 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:50.226350 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:50.226409 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:50.251194 1311248 cri.go:89] found id: ""
	I1218 00:37:50.251208 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.251215 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:50.251220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:50.251287 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:50.278029 1311248 cri.go:89] found id: ""
	I1218 00:37:50.278043 1311248 logs.go:282] 0 containers: []
	W1218 00:37:50.278050 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:50.278057 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:50.278067 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:50.338421 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:50.338443 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:50.368542 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:50.368565 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:50.423715 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:50.423734 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:50.438292 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:50.438308 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:50.499550 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:50.491066   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.491885   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493525   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493818   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.495259   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:50.491066   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.491885   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493525   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.493818   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:50.495259   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
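Before each CRI pass the loop runs its cheapest probe first: `pgrep -xnf` requires the pattern to match the apiserver's entire command line, so a silent, non-zero exit means no such process exists at all, independent of container state. A standalone equivalent of that probe (a sketch using the same pattern the log shows):

    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "kube-apiserver process is running"
    else
      # consistent with every empty crictl listing above
      echo "no kube-apiserver process found"
    fi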
	I1218 00:37:52.999811 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:53.011389 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:53.011453 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:53.036842 1311248 cri.go:89] found id: ""
	I1218 00:37:53.036861 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.036869 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:53.036884 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:53.036981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:53.069368 1311248 cri.go:89] found id: ""
	I1218 00:37:53.069383 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.069391 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:53.069397 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:53.069458 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:53.093990 1311248 cri.go:89] found id: ""
	I1218 00:37:53.094004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.094011 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:53.094016 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:53.094076 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:53.119386 1311248 cri.go:89] found id: ""
	I1218 00:37:53.119400 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.119417 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:53.119423 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:53.119487 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:53.144979 1311248 cri.go:89] found id: ""
	I1218 00:37:53.144992 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.144999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:53.145005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:53.145062 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:53.171485 1311248 cri.go:89] found id: ""
	I1218 00:37:53.171499 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.171506 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:53.171512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:53.171570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:53.198517 1311248 cri.go:89] found id: ""
	I1218 00:37:53.198530 1311248 logs.go:282] 0 containers: []
	W1218 00:37:53.198537 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:53.198545 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:53.198556 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:53.225701 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:53.225719 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:53.280281 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:53.280300 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:53.295217 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:53.295235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:53.360920 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:53.352238   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.352952   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.354786   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.355181   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.356802   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:37:53.352238   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.352952   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.354786   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.355181   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:53.356802   13558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:37:53.360930 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:53.360940 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:55.923673 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:55.935823 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:55.935880 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:55.963196 1311248 cri.go:89] found id: ""
	I1218 00:37:55.963210 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.963217 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:55.963222 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:55.963278 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:55.992688 1311248 cri.go:89] found id: ""
	I1218 00:37:55.992701 1311248 logs.go:282] 0 containers: []
	W1218 00:37:55.992708 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:55.992713 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:55.992778 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:56.032683 1311248 cri.go:89] found id: ""
	I1218 00:37:56.032696 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.032705 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:56.032711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:56.032779 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:56.061554 1311248 cri.go:89] found id: ""
	I1218 00:37:56.061568 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.061575 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:56.061580 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:56.061639 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:56.090855 1311248 cri.go:89] found id: ""
	I1218 00:37:56.090869 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.090877 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:56.090882 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:56.090943 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:56.115990 1311248 cri.go:89] found id: ""
	I1218 00:37:56.116004 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.116020 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:56.116026 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:56.116085 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:56.141361 1311248 cri.go:89] found id: ""
	I1218 00:37:56.141385 1311248 logs.go:282] 0 containers: []
	W1218 00:37:56.141393 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:56.141401 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:56.141412 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:56.202998 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:56.194857   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.195410   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197059   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.197614   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:56.199168   13645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:56.203008 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:56.203019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:56.263974 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:56.263994 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:37:56.295494 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:56.295509 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:56.350431 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:56.350450 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:58.867454 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:37:58.877799 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:37:58.877861 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:37:58.929615 1311248 cri.go:89] found id: ""
	I1218 00:37:58.929629 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.929636 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:37:58.929642 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:37:58.929701 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:37:58.958880 1311248 cri.go:89] found id: ""
	I1218 00:37:58.958894 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.958900 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:37:58.958906 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:37:58.958965 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:37:58.983460 1311248 cri.go:89] found id: ""
	I1218 00:37:58.983475 1311248 logs.go:282] 0 containers: []
	W1218 00:37:58.983482 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:37:58.983487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:37:58.983547 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:37:59.009476 1311248 cri.go:89] found id: ""
	I1218 00:37:59.009490 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.009497 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:37:59.009503 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:37:59.009563 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:37:59.033436 1311248 cri.go:89] found id: ""
	I1218 00:37:59.033450 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.033457 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:37:59.033462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:37:59.033522 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:37:59.058635 1311248 cri.go:89] found id: ""
	I1218 00:37:59.058649 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.058656 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:37:59.058661 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:37:59.058719 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:37:59.082644 1311248 cri.go:89] found id: ""
	I1218 00:37:59.082658 1311248 logs.go:282] 0 containers: []
	W1218 00:37:59.082666 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:37:59.082673 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:37:59.082684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:37:59.138067 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:37:59.138085 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:37:59.154868 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:37:59.154884 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:37:59.232032 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:37:59.223238   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.223623   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225341   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.225946   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:37:59.227686   13756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:37:59.232043 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:37:59.232061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:37:59.297264 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:37:59.297288 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:01.827672 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:01.838270 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:01.838330 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:01.862836 1311248 cri.go:89] found id: ""
	I1218 00:38:01.862855 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.862862 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:01.862867 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:01.862925 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:01.892782 1311248 cri.go:89] found id: ""
	I1218 00:38:01.892797 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.892804 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:01.892810 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:01.892876 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:01.919043 1311248 cri.go:89] found id: ""
	I1218 00:38:01.919068 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.919076 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:01.919081 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:01.919148 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:01.945252 1311248 cri.go:89] found id: ""
	I1218 00:38:01.945267 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.945285 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:01.945291 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:01.945368 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:01.974338 1311248 cri.go:89] found id: ""
	I1218 00:38:01.974353 1311248 logs.go:282] 0 containers: []
	W1218 00:38:01.974361 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:01.974366 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:01.974433 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:02.003307 1311248 cri.go:89] found id: ""
	I1218 00:38:02.003324 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.003332 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:02.003339 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:02.003423 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:02.030938 1311248 cri.go:89] found id: ""
	I1218 00:38:02.030953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:02.030960 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:02.030968 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:02.030979 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:02.100511 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:02.091512   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.092078   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.093817   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.094344   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:02.095821   13854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:02.100521 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:02.100531 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:02.162112 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:02.162132 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:02.191957 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:02.191976 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:02.248095 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:02.248116 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:04.765008 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:04.775100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:04.775168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:04.799097 1311248 cri.go:89] found id: ""
	I1218 00:38:04.799125 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.799132 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:04.799137 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:04.799206 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:04.826968 1311248 cri.go:89] found id: ""
	I1218 00:38:04.826993 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.827000 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:04.827005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:04.827083 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:04.860005 1311248 cri.go:89] found id: ""
	I1218 00:38:04.860020 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.860027 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:04.860032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:04.860103 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:04.886293 1311248 cri.go:89] found id: ""
	I1218 00:38:04.886307 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.886315 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:04.886320 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:04.886385 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:04.918579 1311248 cri.go:89] found id: ""
	I1218 00:38:04.918594 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.918601 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:04.918607 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:04.918676 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:04.945152 1311248 cri.go:89] found id: ""
	I1218 00:38:04.945167 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.945183 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:04.945189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:04.945258 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:04.976410 1311248 cri.go:89] found id: ""
	I1218 00:38:04.976424 1311248 logs.go:282] 0 containers: []
	W1218 00:38:04.976432 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:04.976439 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:04.976449 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:05.032080 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:05.032100 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:05.047379 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:05.047396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:05.113965 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:05.105127   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.105769   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.107565   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.108085   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:05.109817   13969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:05.113975 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:05.113986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:05.174878 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:05.174897 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:07.706926 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:07.717077 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:07.717140 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:07.741430 1311248 cri.go:89] found id: ""
	I1218 00:38:07.741464 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.741471 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:07.741477 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:07.741538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:07.766770 1311248 cri.go:89] found id: ""
	I1218 00:38:07.766784 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.766791 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:07.766796 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:07.766855 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:07.790902 1311248 cri.go:89] found id: ""
	I1218 00:38:07.790917 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.790924 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:07.790929 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:07.791005 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:07.819681 1311248 cri.go:89] found id: ""
	I1218 00:38:07.819696 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.819703 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:07.819708 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:07.819770 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:07.844498 1311248 cri.go:89] found id: ""
	I1218 00:38:07.844512 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.844519 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:07.844524 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:07.844584 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:07.870028 1311248 cri.go:89] found id: ""
	I1218 00:38:07.870043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.870050 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:07.870057 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:07.870125 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:07.906969 1311248 cri.go:89] found id: ""
	I1218 00:38:07.906984 1311248 logs.go:282] 0 containers: []
	W1218 00:38:07.906999 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:07.907007 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:07.907017 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:07.974278 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:07.974306 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:07.989533 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:07.989551 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:08.055867 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:08.047282   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.048178   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.049950   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.050299   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:08.051821   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:08.055877 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:08.055889 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:08.118669 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:08.118693 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:10.651292 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:10.663394 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:10.663471 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:10.687520 1311248 cri.go:89] found id: ""
	I1218 00:38:10.687534 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.687542 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:10.687547 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:10.687608 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:10.713147 1311248 cri.go:89] found id: ""
	I1218 00:38:10.713161 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.713168 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:10.713173 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:10.713231 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:10.737926 1311248 cri.go:89] found id: ""
	I1218 00:38:10.737940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.737948 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:10.737953 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:10.738012 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:10.763422 1311248 cri.go:89] found id: ""
	I1218 00:38:10.763436 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.763443 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:10.763449 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:10.763508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:10.788619 1311248 cri.go:89] found id: ""
	I1218 00:38:10.788659 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.788672 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:10.788677 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:10.788738 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:10.813718 1311248 cri.go:89] found id: ""
	I1218 00:38:10.813732 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.813740 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:10.813745 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:10.813803 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:10.837575 1311248 cri.go:89] found id: ""
	I1218 00:38:10.837588 1311248 logs.go:282] 0 containers: []
	W1218 00:38:10.837595 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:10.837603 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:10.837614 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:10.852133 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:10.852149 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:10.917780 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:10.909596   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.910424   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912063   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.912382   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:10.913799   14171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:10.917791 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:10.917801 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:10.987674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:10.987695 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:11.024530 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:11.024549 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.581947 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:13.592491 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:13.592556 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:13.617579 1311248 cri.go:89] found id: ""
	I1218 00:38:13.617593 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.617600 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:13.617605 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:13.617665 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:13.641975 1311248 cri.go:89] found id: ""
	I1218 00:38:13.641990 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.641997 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:13.642002 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:13.642060 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:13.667128 1311248 cri.go:89] found id: ""
	I1218 00:38:13.667142 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.667149 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:13.667154 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:13.667215 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:13.699564 1311248 cri.go:89] found id: ""
	I1218 00:38:13.699579 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.699586 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:13.699591 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:13.699655 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:13.727620 1311248 cri.go:89] found id: ""
	I1218 00:38:13.727634 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.727641 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:13.727646 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:13.727703 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:13.756118 1311248 cri.go:89] found id: ""
	I1218 00:38:13.756132 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.756138 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:13.756144 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:13.756204 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:13.780706 1311248 cri.go:89] found id: ""
	I1218 00:38:13.780720 1311248 logs.go:282] 0 containers: []
	W1218 00:38:13.780728 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:13.780736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:13.780746 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:13.842845 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:13.842864 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:13.871826 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:13.871843 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:13.932300 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:13.932319 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:13.950089 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:13.950106 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:14.022114 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:14.013202   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.013842   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.015677   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.016243   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:14.018014   14298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:16.522391 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:16.534271 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:16.534357 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:16.558729 1311248 cri.go:89] found id: ""
	I1218 00:38:16.558743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.558757 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:16.558762 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:16.558819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:16.587758 1311248 cri.go:89] found id: ""
	I1218 00:38:16.587772 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.587779 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:16.587784 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:16.587841 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:16.612793 1311248 cri.go:89] found id: ""
	I1218 00:38:16.612807 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.612814 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:16.612819 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:16.612907 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:16.637417 1311248 cri.go:89] found id: ""
	I1218 00:38:16.637431 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.637438 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:16.637443 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:16.637508 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:16.662059 1311248 cri.go:89] found id: ""
	I1218 00:38:16.662073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.662080 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:16.662085 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:16.662141 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:16.686710 1311248 cri.go:89] found id: ""
	I1218 00:38:16.686724 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.686731 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:16.686737 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:16.686794 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:16.711539 1311248 cri.go:89] found id: ""
	I1218 00:38:16.711553 1311248 logs.go:282] 0 containers: []
	W1218 00:38:16.711561 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:16.711569 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:16.711579 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:16.739136 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:16.739151 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:16.794672 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:16.794694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:16.809147 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:16.809171 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:16.878702 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:16.870875   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.871602   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873068   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.873374   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:16.874856   14396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:16.878711 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:16.878723 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.444575 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:19.454827 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:19.454887 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:19.482057 1311248 cri.go:89] found id: ""
	I1218 00:38:19.482071 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.482078 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:19.482083 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:19.482142 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:19.505124 1311248 cri.go:89] found id: ""
	I1218 00:38:19.505138 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.505146 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:19.505151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:19.505209 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:19.530010 1311248 cri.go:89] found id: ""
	I1218 00:38:19.530024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.530031 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:19.530037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:19.530094 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:19.555994 1311248 cri.go:89] found id: ""
	I1218 00:38:19.556008 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.556025 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:19.556030 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:19.556087 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:19.580515 1311248 cri.go:89] found id: ""
	I1218 00:38:19.580539 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.580546 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:19.580554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:19.580619 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:19.605333 1311248 cri.go:89] found id: ""
	I1218 00:38:19.605348 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.605354 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:19.605360 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:19.605418 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:19.630483 1311248 cri.go:89] found id: ""
	I1218 00:38:19.630497 1311248 logs.go:282] 0 containers: []
	W1218 00:38:19.630504 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:19.630512 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:19.630522 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:19.693128 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:19.684824   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.685371   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.686899   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.687332   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:19.688942   14481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
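	Every retry in this window fails identically: kubectl on the node cannot reach the API server because nothing is listening on localhost:8441, so each attempt dies with "connection refused" before any API discovery happens. A minimal manual probe, assuming the profile name functional-232602 from this run and that ss is available in the node image:

	    # hypothetical manual probe; profile name taken from this run
	    minikube -p functional-232602 ssh -- "sudo ss -tlnp | grep 8441 || echo 'nothing listening on :8441'"
	    # replay the exact kubectl call minikube keeps retrying (copied from the log)
	    minikube -p functional-232602 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig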
	I1218 00:38:19.693138 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:19.693148 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:19.755570 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:19.755590 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:19.785139 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:19.785156 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:19.842579 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:19.842605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
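	Each gathering cycle above starts with a per-component sweep of the CRI, one crictl query per expected control-plane container, followed by the containerd and kubelet journals, container status, and dmesg. The sweep is plain shell and can be reproduced by hand on the node; the component names are copied from the log:

	    # one query per component, exactly as minikube issues them
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name="$c"
	    done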
	I1218 00:38:22.358338 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:22.368724 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:22.368793 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:22.392394 1311248 cri.go:89] found id: ""
	I1218 00:38:22.392408 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.392415 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:22.392420 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:22.392478 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:22.419029 1311248 cri.go:89] found id: ""
	I1218 00:38:22.419043 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.419050 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:22.419055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:22.419117 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:22.443838 1311248 cri.go:89] found id: ""
	I1218 00:38:22.443852 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.443859 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:22.443864 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:22.443923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:22.467780 1311248 cri.go:89] found id: ""
	I1218 00:38:22.467794 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.467801 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:22.467807 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:22.467864 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:22.497254 1311248 cri.go:89] found id: ""
	I1218 00:38:22.497268 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.497276 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:22.497281 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:22.497340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:22.521672 1311248 cri.go:89] found id: ""
	I1218 00:38:22.521686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.521693 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:22.521699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:22.521758 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:22.548085 1311248 cri.go:89] found id: ""
	I1218 00:38:22.548119 1311248 logs.go:282] 0 containers: []
	W1218 00:38:22.548126 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:22.548134 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:22.548144 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:22.614828 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:22.614852 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:22.643447 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:22.643462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:22.698947 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:22.698967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:22.713971 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:22.713986 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:22.789955 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:22.774480   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.775084   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.776811   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.783961   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:22.784735   14604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:25.290158 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:25.300164 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:25.300226 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:25.323897 1311248 cri.go:89] found id: ""
	I1218 00:38:25.323912 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.323919 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:25.323924 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:25.323985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:25.352232 1311248 cri.go:89] found id: ""
	I1218 00:38:25.352245 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.352252 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:25.352257 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:25.352314 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:25.376749 1311248 cri.go:89] found id: ""
	I1218 00:38:25.376785 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.376792 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:25.376797 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:25.376868 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:25.401002 1311248 cri.go:89] found id: ""
	I1218 00:38:25.401015 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.401023 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:25.401028 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:25.401089 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:25.426497 1311248 cri.go:89] found id: ""
	I1218 00:38:25.426510 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.426517 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:25.426522 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:25.426579 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:25.450505 1311248 cri.go:89] found id: ""
	I1218 00:38:25.450518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.450525 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:25.450536 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:25.450593 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:25.478999 1311248 cri.go:89] found id: ""
	I1218 00:38:25.479013 1311248 logs.go:282] 0 containers: []
	W1218 00:38:25.479029 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:25.479037 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:25.479048 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:25.540968 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:25.532836   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.533607   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535202   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.535518   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:25.537180   14689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:25.540977 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:25.540987 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:25.601527 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:25.601546 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:25.633804 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:25.633826 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:25.691056 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:25.691076 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
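	The pgrep probes land roughly every three seconds, which suggests a fixed-interval wait loop wrapped around the process check. A minimal shell sketch of that shape; the three-second interval is inferred from the timestamps above, and the timeout budget is an assumption, not minikube's real setting:

	    # wait for the apiserver process, polling every 3 s, ~4 min budget (assumed)
	    end=$((SECONDS + 240))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$end" ]; then echo 'timed out waiting for kube-apiserver'; break; fi
	      sleep 3
	    done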
	I1218 00:38:28.206639 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:28.217134 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:28.217198 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:28.242357 1311248 cri.go:89] found id: ""
	I1218 00:38:28.242372 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.242378 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:28.242384 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:28.242449 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:28.271155 1311248 cri.go:89] found id: ""
	I1218 00:38:28.271169 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.271176 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:28.271181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:28.271242 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:28.296330 1311248 cri.go:89] found id: ""
	I1218 00:38:28.296345 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.296352 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:28.296357 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:28.296413 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:28.320425 1311248 cri.go:89] found id: ""
	I1218 00:38:28.320449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.320456 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:28.320461 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:28.320528 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:28.345590 1311248 cri.go:89] found id: ""
	I1218 00:38:28.345603 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.345610 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:28.345625 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:28.345688 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:28.374296 1311248 cri.go:89] found id: ""
	I1218 00:38:28.374310 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.374334 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:28.374340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:28.374407 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:28.397991 1311248 cri.go:89] found id: ""
	I1218 00:38:28.398006 1311248 logs.go:282] 0 containers: []
	W1218 00:38:28.398014 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:28.398023 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:28.398033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:28.453794 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:28.453812 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:28.468531 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:28.468547 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:28.536754 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:28.527630   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529120   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.529993   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531712   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:28.531990   14799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:28.536784 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:28.536796 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:28.599155 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:28.599174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:31.143176 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:31.156254 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:31.156313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:31.185437 1311248 cri.go:89] found id: ""
	I1218 00:38:31.185452 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.185460 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:31.185472 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:31.185531 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:31.215130 1311248 cri.go:89] found id: ""
	I1218 00:38:31.215144 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.215153 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:31.215157 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:31.215217 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:31.240144 1311248 cri.go:89] found id: ""
	I1218 00:38:31.240157 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.240164 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:31.240169 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:31.240227 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:31.265058 1311248 cri.go:89] found id: ""
	I1218 00:38:31.265072 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.265079 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:31.265084 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:31.265150 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:31.289354 1311248 cri.go:89] found id: ""
	I1218 00:38:31.289368 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.289375 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:31.289380 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:31.289438 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:31.319744 1311248 cri.go:89] found id: ""
	I1218 00:38:31.319758 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.319766 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:31.319771 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:31.319826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:31.343739 1311248 cri.go:89] found id: ""
	I1218 00:38:31.343753 1311248 logs.go:282] 0 containers: []
	W1218 00:38:31.343760 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:31.343768 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:31.343778 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:31.399267 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:31.399287 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:31.413578 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:31.413595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:31.478705 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:31.470217   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.470850   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472351   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.472980   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:31.474460   14905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:31.478714 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:31.478724 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:31.540680 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:31.540703 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.068816 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:34.079525 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:34.079589 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:34.106415 1311248 cri.go:89] found id: ""
	I1218 00:38:34.106432 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.106440 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:34.106445 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:34.106506 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:34.131181 1311248 cri.go:89] found id: ""
	I1218 00:38:34.131195 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.131202 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:34.131208 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:34.131265 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:34.166885 1311248 cri.go:89] found id: ""
	I1218 00:38:34.166898 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.166906 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:34.166911 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:34.166970 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:34.197771 1311248 cri.go:89] found id: ""
	I1218 00:38:34.197786 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.197793 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:34.197798 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:34.197856 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:34.226531 1311248 cri.go:89] found id: ""
	I1218 00:38:34.226546 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.226552 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:34.226557 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:34.226614 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:34.252100 1311248 cri.go:89] found id: ""
	I1218 00:38:34.252114 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.252121 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:34.252127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:34.252185 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:34.278653 1311248 cri.go:89] found id: ""
	I1218 00:38:34.278667 1311248 logs.go:282] 0 containers: []
	W1218 00:38:34.278675 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:34.278683 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:34.278694 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:34.293444 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:34.293463 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:34.359201 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:34.350070   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.350710   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.352463   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.353043   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:34.354729   15006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:34.359211 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:34.359221 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:34.420750 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:34.420773 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:34.449621 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:34.449637 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
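	The journal and kernel-log steps can be replayed verbatim on the node: -n 400 caps each journal at its most recent 400 lines, and the dmesg flags disable the pager and coloring and keep only warning-or-worse kernel messages:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400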
	I1218 00:38:37.006206 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:37.019401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:37.019472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:37.047646 1311248 cri.go:89] found id: ""
	I1218 00:38:37.047660 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.047667 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:37.047673 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:37.047733 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:37.076612 1311248 cri.go:89] found id: ""
	I1218 00:38:37.076646 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.076653 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:37.076658 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:37.076717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:37.102368 1311248 cri.go:89] found id: ""
	I1218 00:38:37.102383 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.102390 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:37.102395 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:37.102452 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:37.126829 1311248 cri.go:89] found id: ""
	I1218 00:38:37.126843 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.126850 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:37.126855 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:37.126913 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:37.159965 1311248 cri.go:89] found id: ""
	I1218 00:38:37.159980 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.159987 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:37.159992 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:37.160048 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:37.193535 1311248 cri.go:89] found id: ""
	I1218 00:38:37.193549 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.193558 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:37.193564 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:37.193622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:37.224708 1311248 cri.go:89] found id: ""
	I1218 00:38:37.224723 1311248 logs.go:282] 0 containers: []
	W1218 00:38:37.224730 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:37.224738 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:37.224749 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:37.287765 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:37.279761   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.280395   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282045   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.282472   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:37.283927   15105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:37.287775 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:37.287787 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:37.349218 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:37.349239 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:37.377886 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:37.377902 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:37.435205 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:37.435224 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
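	The "container status" step is written defensively: the backtick substitution resolves crictl when it is installed and otherwise leaves the bare name (so the first command fails), which trips the || fallback to docker ps. The same pattern, generalized with modern substitution syntax:

	    # falls back to docker when crictl is absent or errors out
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a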
	I1218 00:38:39.950327 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:39.960885 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:39.960948 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:39.985573 1311248 cri.go:89] found id: ""
	I1218 00:38:39.985587 1311248 logs.go:282] 0 containers: []
	W1218 00:38:39.985596 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:39.985602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:39.985662 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:40.020843 1311248 cri.go:89] found id: ""
	I1218 00:38:40.020859 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.020867 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:40.020873 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:40.020949 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:40.067991 1311248 cri.go:89] found id: ""
	I1218 00:38:40.068007 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.068015 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:40.068021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:40.068096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:40.097024 1311248 cri.go:89] found id: ""
	I1218 00:38:40.097039 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.097047 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:40.097053 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:40.097118 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:40.127502 1311248 cri.go:89] found id: ""
	I1218 00:38:40.127518 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.127526 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:40.127531 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:40.127595 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:40.165566 1311248 cri.go:89] found id: ""
	I1218 00:38:40.165580 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.165587 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:40.165593 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:40.165660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:40.204927 1311248 cri.go:89] found id: ""
	I1218 00:38:40.204940 1311248 logs.go:282] 0 containers: []
	W1218 00:38:40.204948 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:40.204956 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:40.204967 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:40.222297 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:40.222314 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:40.292382 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:40.283960   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.284578   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286275   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.286834   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:40.288380   15214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:40.292392 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:40.292403 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:40.353852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:40.353871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:40.385828 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:40.385844 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
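	By this point every cycle has returned found id: "" for all seven components while the kubelet and containerd journals still collect fine; the runtime is up, but no control-plane container was ever created. Assuming the node uses the usual kubeadm static-pod layout (an assumption about the image, not something the log states), the next manual checks would be:

	    ls /etc/kubernetes/manifests   # are the static pod manifests present? (kubeadm layout assumed)
	    sudo crictl pods               # did any pod sandbox get created at all?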
	I1218 00:38:42.942427 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:42.952937 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:42.952996 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:42.982184 1311248 cri.go:89] found id: ""
	I1218 00:38:42.982201 1311248 logs.go:282] 0 containers: []
	W1218 00:38:42.982208 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:42.982213 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:42.982271 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:43.009928 1311248 cri.go:89] found id: ""
	I1218 00:38:43.009944 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.009952 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:43.009957 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:43.010021 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:43.036384 1311248 cri.go:89] found id: ""
	I1218 00:38:43.036397 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.036405 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:43.036410 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:43.036472 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:43.061945 1311248 cri.go:89] found id: ""
	I1218 00:38:43.061959 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.061967 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:43.061972 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:43.062030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:43.087977 1311248 cri.go:89] found id: ""
	I1218 00:38:43.087992 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.087999 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:43.088005 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:43.088069 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:43.113297 1311248 cri.go:89] found id: ""
	I1218 00:38:43.113312 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.113319 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:43.113324 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:43.113390 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:43.148378 1311248 cri.go:89] found id: ""
	I1218 00:38:43.148392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:43.148399 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:43.148408 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:43.148419 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:43.218202 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:43.218227 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:43.234424 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:43.234441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:43.295849 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:43.287382   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.287819   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289537   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.289959   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:43.291588   15319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1218 00:38:43.295860 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:43.295871 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:43.357903 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:43.357924 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
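The cycle above is minikube's apiserver wait loop: it probes for a running kube-apiserver process with pgrep, lists each expected control-plane container via crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal sketch of the same checks run by hand against this test's profile (the profile name functional-232602 is taken from this run; adjust for other profiles):

	# Probe for the apiserver process exactly as the wait loop does
	minikube ssh -p functional-232602 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# List each control-plane container, running or exited; empty output
	# per component matches the 'found id: ""' entries in the log above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  minikube ssh -p functional-232602 -- sudo crictl ps -a --quiet --name "$c"
	done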
	I1218 00:38:45.889646 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:45.899918 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:45.899981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:45.923610 1311248 cri.go:89] found id: ""
	I1218 00:38:45.923623 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.923630 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:45.923635 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:45.923696 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:45.949282 1311248 cri.go:89] found id: ""
	I1218 00:38:45.949296 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.949304 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:45.949309 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:45.949371 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:45.974071 1311248 cri.go:89] found id: ""
	I1218 00:38:45.974085 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.974092 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:45.974097 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:45.974153 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:45.997865 1311248 cri.go:89] found id: ""
	I1218 00:38:45.997880 1311248 logs.go:282] 0 containers: []
	W1218 00:38:45.997887 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:45.997892 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:45.997953 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:46.026399 1311248 cri.go:89] found id: ""
	I1218 00:38:46.026413 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.026426 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:46.026432 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:46.026490 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:46.060011 1311248 cri.go:89] found id: ""
	I1218 00:38:46.060026 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.060033 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:46.060038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:46.060097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:46.095378 1311248 cri.go:89] found id: ""
	I1218 00:38:46.095392 1311248 logs.go:282] 0 containers: []
	W1218 00:38:46.095398 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:46.095407 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:46.095418 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:46.110828 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:46.110845 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:46.194637 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:46.185725   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.186782   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.188419   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.189040   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:46.190629   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:46.194647 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:46.194657 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:46.265968 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:46.265989 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:46.298428 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:46.298444 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:48.855794 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:48.868391 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:48.868457 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:48.898010 1311248 cri.go:89] found id: ""
	I1218 00:38:48.898024 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.898032 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:48.898037 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:48.898097 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:48.926962 1311248 cri.go:89] found id: ""
	I1218 00:38:48.926976 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.926984 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:48.926989 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:48.927046 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:48.953073 1311248 cri.go:89] found id: ""
	I1218 00:38:48.953096 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.953104 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:48.953109 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:48.953171 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:48.978527 1311248 cri.go:89] found id: ""
	I1218 00:38:48.978542 1311248 logs.go:282] 0 containers: []
	W1218 00:38:48.978548 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:48.978554 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:48.978611 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:49.005774 1311248 cri.go:89] found id: ""
	I1218 00:38:49.005791 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.005800 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:49.005805 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:49.005881 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:49.032714 1311248 cri.go:89] found id: ""
	I1218 00:38:49.032743 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.032751 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:49.032756 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:49.032845 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:49.058437 1311248 cri.go:89] found id: ""
	I1218 00:38:49.058451 1311248 logs.go:282] 0 containers: []
	W1218 00:38:49.058459 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:49.058468 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:49.058478 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:49.114793 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:49.114813 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:49.129898 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:49.129916 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:49.218168 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:49.209810   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.210315   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212057   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.212459   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:49.213888   15518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:49.218179 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:49.218190 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:49.289574 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:49.289595 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
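Every describe-nodes attempt fails the same way: kubectl inside the node targets https://localhost:8441 (the --apiserver-port this test starts with) and gets connection refused, which is consistent with the empty crictl listings above; nothing is serving on that port because the apiserver container never came up. A quick confirmation that no listener exists, sketched under the assumption that ss from iproute2 is present in the node image:

	# Show any TCP listener on the apiserver port; no output means nothing
	# is bound to 8441, matching the 'connection refused' errors in the log
	# (grep runs on the host, filtering the ssh output)
	minikube ssh -p functional-232602 -- sudo ss -ltnp | grep 8441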
	I1218 00:38:51.822637 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:51.833100 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:51.833161 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:51.858494 1311248 cri.go:89] found id: ""
	I1218 00:38:51.858508 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.858515 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:51.858520 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:51.858609 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:51.883202 1311248 cri.go:89] found id: ""
	I1218 00:38:51.883217 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.883224 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:51.883229 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:51.883286 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:51.911732 1311248 cri.go:89] found id: ""
	I1218 00:38:51.911746 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.911753 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:51.911758 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:51.911813 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:51.937059 1311248 cri.go:89] found id: ""
	I1218 00:38:51.937073 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.937080 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:51.937086 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:51.937144 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:51.960983 1311248 cri.go:89] found id: ""
	I1218 00:38:51.960998 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.961016 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:51.961021 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:51.961095 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:51.985889 1311248 cri.go:89] found id: ""
	I1218 00:38:51.985904 1311248 logs.go:282] 0 containers: []
	W1218 00:38:51.985911 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:51.985916 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:51.985976 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:52.012132 1311248 cri.go:89] found id: ""
	I1218 00:38:52.012147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:52.012155 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:52.012163 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:52.012174 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:52.080718 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:52.072140   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.072793   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074356   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.074844   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:52.076393   15617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:52.080736 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:52.080748 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:52.144427 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:52.144446 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:52.176847 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:52.176869 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:52.239307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:52.239325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:54.754340 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:54.764793 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:54.764857 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:54.794012 1311248 cri.go:89] found id: ""
	I1218 00:38:54.794027 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.794034 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:54.794039 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:54.794096 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:54.823133 1311248 cri.go:89] found id: ""
	I1218 00:38:54.823147 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.823155 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:54.823160 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:54.823216 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:54.847977 1311248 cri.go:89] found id: ""
	I1218 00:38:54.847991 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.847998 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:54.848003 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:54.848064 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:54.873449 1311248 cri.go:89] found id: ""
	I1218 00:38:54.873462 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.873469 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:54.873475 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:54.873532 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:54.897891 1311248 cri.go:89] found id: ""
	I1218 00:38:54.897905 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.897922 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:54.897928 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:54.897985 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:54.922432 1311248 cri.go:89] found id: ""
	I1218 00:38:54.922449 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.922456 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:54.922462 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:54.922520 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:54.947869 1311248 cri.go:89] found id: ""
	I1218 00:38:54.947884 1311248 logs.go:282] 0 containers: []
	W1218 00:38:54.947908 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:54.947916 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:54.947927 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:55.005409 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:55.005434 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:55.026491 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:55.026508 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:55.094641 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:55.084941   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086166   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.086709   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.088455   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:55.089216   15730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:55.094652 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:55.094663 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:38:55.159462 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:55.159481 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.695023 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:38:57.706079 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:38:57.706147 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:38:57.735083 1311248 cri.go:89] found id: ""
	I1218 00:38:57.735106 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.735114 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:38:57.735119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:38:57.735178 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:38:57.762228 1311248 cri.go:89] found id: ""
	I1218 00:38:57.762242 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.762249 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:38:57.762255 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:38:57.762313 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:38:57.787211 1311248 cri.go:89] found id: ""
	I1218 00:38:57.787226 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.787233 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:38:57.787238 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:38:57.787303 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:38:57.812671 1311248 cri.go:89] found id: ""
	I1218 00:38:57.812686 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.812693 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:38:57.812699 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:38:57.812762 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:38:57.840939 1311248 cri.go:89] found id: ""
	I1218 00:38:57.840953 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.840961 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:38:57.840966 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:38:57.841031 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:38:57.867148 1311248 cri.go:89] found id: ""
	I1218 00:38:57.867163 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.867170 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:38:57.867175 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:38:57.867232 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:38:57.891633 1311248 cri.go:89] found id: ""
	I1218 00:38:57.891648 1311248 logs.go:282] 0 containers: []
	W1218 00:38:57.891665 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:38:57.891674 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:38:57.891684 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:38:57.918896 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:38:57.918913 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:38:57.975605 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:38:57.975625 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:38:57.990660 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:38:57.990676 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:38:58.063038 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:38:58.053532   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.054524   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.056457   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.057354   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.058032   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:38:58.053532   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.054524   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.056457   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.057354   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:38:58.058032   15848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:38:58.063048 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:38:58.063061 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.627359 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:00.638675 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:00.638768 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:00.669731 1311248 cri.go:89] found id: ""
	I1218 00:39:00.669745 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.669752 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:00.669757 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:00.669824 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:00.697124 1311248 cri.go:89] found id: ""
	I1218 00:39:00.697138 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.697145 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:00.697151 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:00.697211 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:00.722455 1311248 cri.go:89] found id: ""
	I1218 00:39:00.722469 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.722476 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:00.722486 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:00.722545 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:00.750996 1311248 cri.go:89] found id: ""
	I1218 00:39:00.751010 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.751018 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:00.751023 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:00.751091 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:00.780012 1311248 cri.go:89] found id: ""
	I1218 00:39:00.780026 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.780033 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:00.780038 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:00.780105 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:00.807119 1311248 cri.go:89] found id: ""
	I1218 00:39:00.807133 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.807140 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:00.807145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:00.807213 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:00.836658 1311248 cri.go:89] found id: ""
	I1218 00:39:00.836673 1311248 logs.go:282] 0 containers: []
	W1218 00:39:00.836681 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:00.836689 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:00.836699 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:00.851616 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:00.851633 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:00.919909 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:00.908348   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.909901   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.912294   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.913473   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.915083   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:00.908348   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.909901   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.912294   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.913473   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:00.915083   15938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:00.919918 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:00.919929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:00.985802 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:00.985823 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:01.017691 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:01.017707 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.574413 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:03.585024 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:03.585088 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:03.615721 1311248 cri.go:89] found id: ""
	I1218 00:39:03.615735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.615742 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:03.615748 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:03.615811 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:03.641216 1311248 cri.go:89] found id: ""
	I1218 00:39:03.641230 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.641237 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:03.641243 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:03.641307 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:03.665604 1311248 cri.go:89] found id: ""
	I1218 00:39:03.665618 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.665625 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:03.665639 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:03.665717 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:03.690936 1311248 cri.go:89] found id: ""
	I1218 00:39:03.690951 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.690958 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:03.690970 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:03.691030 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:03.716763 1311248 cri.go:89] found id: ""
	I1218 00:39:03.716794 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.716806 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:03.716811 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:03.716898 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:03.742156 1311248 cri.go:89] found id: ""
	I1218 00:39:03.742170 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.742177 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:03.742183 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:03.742240 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:03.771205 1311248 cri.go:89] found id: ""
	I1218 00:39:03.771220 1311248 logs.go:282] 0 containers: []
	W1218 00:39:03.771227 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:03.771235 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:03.771245 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:03.834106 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:03.834127 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:03.863112 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:03.863129 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:03.919444 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:03.919465 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:03.934588 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:03.934607 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:04.000293 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:03.991688   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.992412   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994242   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994901   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.996407   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:03.991688   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.992412   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994242   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.994901   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:03.996407   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
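The timestamps show the loop repeating on a roughly three-second interval with no change in state, so the container listings will never explain the failure by themselves; the kubelet journal, which the gatherer already collects inline, is the place to look for why the static apiserver pod is not being started. A sketch for pulling that same excerpt for offline reading (the line count of 400 mirrors what the gatherer uses):

	# Pull the same kubelet journal excerpt the log gatherer collects,
	# without a pager, so it can be saved or searched locally
	minikube ssh -p functional-232602 -- sudo journalctl -u kubelet -n 400 --no-pager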
	I1218 00:39:06.500788 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:06.511530 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:06.511596 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:06.536538 1311248 cri.go:89] found id: ""
	I1218 00:39:06.536554 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.536562 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:06.536568 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:06.536651 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:06.565199 1311248 cri.go:89] found id: ""
	I1218 00:39:06.565213 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.565219 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:06.565224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:06.565283 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:06.589614 1311248 cri.go:89] found id: ""
	I1218 00:39:06.589628 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.589636 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:06.589641 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:06.589700 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:06.614004 1311248 cri.go:89] found id: ""
	I1218 00:39:06.614019 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.614027 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:06.614032 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:06.614093 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:06.638819 1311248 cri.go:89] found id: ""
	I1218 00:39:06.638833 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.638841 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:06.638846 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:06.638908 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:06.666620 1311248 cri.go:89] found id: ""
	I1218 00:39:06.666634 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.666643 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:06.666648 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:06.666707 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:06.694192 1311248 cri.go:89] found id: ""
	I1218 00:39:06.694207 1311248 logs.go:282] 0 containers: []
	W1218 00:39:06.694216 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:06.694224 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:06.694235 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:06.709318 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:06.709336 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:06.773553 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:06.764393   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.765251   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767036   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767646   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.769278   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:06.764393   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.765251   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767036   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.767646   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:06.769278   16150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:06.773564 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:06.773587 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:06.842917 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:06.842937 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:06.877280 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:06.877296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
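	The lines above make up one complete health-check pass: pgrep for a running kube-apiserver process, a per-component crictl listing (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, because every listing comes back empty, a round of log gathering. The passes below repeat this roughly every three seconds. A rough Go sketch of that shape (illustrative only; minikube's real logic lives in logs.go and cri.go, and the retry bound here is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// listContainers mirrors the logged command
	// "sudo crictl ps -a --quiet --name=<name>"; crictl prints one container
	// ID per line, so empty output means no matching container exists.
	func listContainers(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		s := strings.TrimSpace(string(out))
		if err != nil || s == "" {
			return nil
		}
		return strings.Split(s, "\n")
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for attempt := 1; attempt <= 10; attempt++ {
			found := 0
			for _, c := range components {
				found += len(listContainers(c))
			}
			if found > 0 {
				fmt.Println("control-plane containers found")
				return
			}
			time.Sleep(3 * time.Second) // the log shows ~3s between passes
		}
		fmt.Println("gave up: no control-plane containers")
	}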
	I1218 00:39:09.433923 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:09.445181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:09.445248 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:09.470100 1311248 cri.go:89] found id: ""
	I1218 00:39:09.470115 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.470122 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:09.470127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:09.470184 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:09.499949 1311248 cri.go:89] found id: ""
	I1218 00:39:09.499964 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.499973 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:09.499978 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:09.500044 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:09.526313 1311248 cri.go:89] found id: ""
	I1218 00:39:09.526328 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.526335 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:09.526340 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:09.526404 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:09.551831 1311248 cri.go:89] found id: ""
	I1218 00:39:09.551844 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.551851 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:09.551857 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:09.551923 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:09.577535 1311248 cri.go:89] found id: ""
	I1218 00:39:09.577549 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.577557 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:09.577561 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:09.577622 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:09.602570 1311248 cri.go:89] found id: ""
	I1218 00:39:09.602584 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.602591 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:09.602597 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:09.602658 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:09.630715 1311248 cri.go:89] found id: ""
	I1218 00:39:09.630729 1311248 logs.go:282] 0 containers: []
	W1218 00:39:09.630736 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:09.630745 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:09.630755 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:09.686840 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:09.686859 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:09.703315 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:09.703331 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:09.770650 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:09.762484   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.762995   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.764603   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.765164   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.766687   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:09.762484   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.762995   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.764603   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.765164   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:09.766687   16256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:09.770660 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:09.770670 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:09.832439 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:09.832457 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
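	The container-status command above is a two-level shell fallback: "which crictl || echo crictl" substitutes crictl's full path when it is installed (or leaves the bare name so the call fails cleanly), and "|| sudo docker ps -a" runs only when the crictl invocation fails. The same try-crictl-then-docker shape in Go (a hypothetical equivalent, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl and falls back to docker when
	// crictl is missing or exits nonzero, like the shell one-liner above.
	func containerStatus() ([]byte, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
		return exec.Command("sudo", "docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("neither crictl nor docker available:", err)
			return
		}
		fmt.Print(string(out))
	}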
	I1218 00:39:12.361961 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:12.372127 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:12.372190 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:12.408061 1311248 cri.go:89] found id: ""
	I1218 00:39:12.408075 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.408082 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:12.408088 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:12.408145 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:12.434860 1311248 cri.go:89] found id: ""
	I1218 00:39:12.434874 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.434881 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:12.434886 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:12.434946 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:12.465255 1311248 cri.go:89] found id: ""
	I1218 00:39:12.465270 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.465278 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:12.465283 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:12.465341 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:12.494330 1311248 cri.go:89] found id: ""
	I1218 00:39:12.494344 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.494350 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:12.494356 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:12.494420 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:12.518885 1311248 cri.go:89] found id: ""
	I1218 00:39:12.518900 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.518907 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:12.518912 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:12.518973 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:12.543549 1311248 cri.go:89] found id: ""
	I1218 00:39:12.543564 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.543573 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:12.543578 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:12.543641 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:12.568469 1311248 cri.go:89] found id: ""
	I1218 00:39:12.568483 1311248 logs.go:282] 0 containers: []
	W1218 00:39:12.568500 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:12.568507 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:12.568519 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:12.624017 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:12.624039 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:12.639011 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:12.639028 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:12.703723 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:12.695186   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.695878   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697409   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697942   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.699375   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:12.695186   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.695878   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697409   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.697942   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:12.699375   16360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:12.703734 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:12.703744 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:12.765331 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:12.765350 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.294913 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:15.308145 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:15.308210 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:15.340203 1311248 cri.go:89] found id: ""
	I1218 00:39:15.340218 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.340225 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:15.340230 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:15.340289 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:15.367732 1311248 cri.go:89] found id: ""
	I1218 00:39:15.367747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.367754 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:15.367760 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:15.367818 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:15.398027 1311248 cri.go:89] found id: ""
	I1218 00:39:15.398042 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.398049 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:15.398055 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:15.398115 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:15.430352 1311248 cri.go:89] found id: ""
	I1218 00:39:15.430366 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.430373 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:15.430379 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:15.430442 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:15.461268 1311248 cri.go:89] found id: ""
	I1218 00:39:15.461283 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.461291 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:15.461297 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:15.461361 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:15.487656 1311248 cri.go:89] found id: ""
	I1218 00:39:15.487671 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.487678 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:15.487684 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:15.487744 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:15.516835 1311248 cri.go:89] found id: ""
	I1218 00:39:15.516850 1311248 logs.go:282] 0 containers: []
	W1218 00:39:15.516858 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:15.516867 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:15.516877 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:15.584348 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:15.575875   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.576694   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578221   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578844   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.580399   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:15.575875   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.576694   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578221   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.578844   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:15.580399   16462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:15.584357 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:15.584377 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:15.646829 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:15.646849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:15.675913 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:15.675929 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:15.731421 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:15.731441 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:18.246605 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:18.257277 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:18.257340 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:18.282497 1311248 cri.go:89] found id: ""
	I1218 00:39:18.282512 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.282519 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:18.282527 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:18.282594 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:18.317178 1311248 cri.go:89] found id: ""
	I1218 00:39:18.317193 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.317200 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:18.317205 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:18.317267 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:18.342018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.342032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.342039 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:18.342044 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:18.342098 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:18.366018 1311248 cri.go:89] found id: ""
	I1218 00:39:18.366032 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.366040 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:18.366045 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:18.366107 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:18.390880 1311248 cri.go:89] found id: ""
	I1218 00:39:18.390894 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.390902 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:18.390908 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:18.390968 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:18.427152 1311248 cri.go:89] found id: ""
	I1218 00:39:18.427167 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.427174 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:18.427181 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:18.427241 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:18.458481 1311248 cri.go:89] found id: ""
	I1218 00:39:18.458495 1311248 logs.go:282] 0 containers: []
	W1218 00:39:18.458502 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:18.458510 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:18.458521 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:18.486379 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:18.486397 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:18.546371 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:18.546396 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:18.561410 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:18.561431 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:18.625094 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:18.616770   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.617298   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.618947   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.619597   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.621337   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:18.616770   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.617298   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.618947   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.619597   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:18.621337   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:18.625105 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:18.625118 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.187071 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:21.197777 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:21.197842 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:21.228457 1311248 cri.go:89] found id: ""
	I1218 00:39:21.228472 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.228479 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:21.228485 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:21.228551 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:21.254227 1311248 cri.go:89] found id: ""
	I1218 00:39:21.254240 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.254258 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:21.254264 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:21.254321 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:21.283166 1311248 cri.go:89] found id: ""
	I1218 00:39:21.283180 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.283187 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:21.283193 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:21.283259 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:21.307940 1311248 cri.go:89] found id: ""
	I1218 00:39:21.307954 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.307962 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:21.307967 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:21.308022 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:21.333576 1311248 cri.go:89] found id: ""
	I1218 00:39:21.333590 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.333597 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:21.333602 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:21.333660 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:21.357404 1311248 cri.go:89] found id: ""
	I1218 00:39:21.357418 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.357425 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:21.357430 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:21.357488 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:21.386789 1311248 cri.go:89] found id: ""
	I1218 00:39:21.386803 1311248 logs.go:282] 0 containers: []
	W1218 00:39:21.386811 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:21.386819 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:21.386830 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:21.467813 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:21.459694   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.460331   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.461853   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.462343   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.463834   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:21.459694   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.460331   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.461853   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.462343   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:21.463834   16668 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:21.467824 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:21.467834 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:21.529999 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:21.530019 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:21.561213 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:21.561228 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:21.619110 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:21.619128 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:24.133884 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:24.144224 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:24.144298 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:24.169895 1311248 cri.go:89] found id: ""
	I1218 00:39:24.169909 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.169916 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:24.169922 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:24.169981 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:24.196376 1311248 cri.go:89] found id: ""
	I1218 00:39:24.196390 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.196396 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:24.196401 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:24.196464 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:24.220959 1311248 cri.go:89] found id: ""
	I1218 00:39:24.220978 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.220986 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:24.220991 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:24.221051 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:24.246721 1311248 cri.go:89] found id: ""
	I1218 00:39:24.246735 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.246745 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:24.246751 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:24.246819 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:24.271380 1311248 cri.go:89] found id: ""
	I1218 00:39:24.271394 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.271401 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:24.271406 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:24.271466 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:24.298631 1311248 cri.go:89] found id: ""
	I1218 00:39:24.298645 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.298652 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:24.298657 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:24.298713 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:24.322933 1311248 cri.go:89] found id: ""
	I1218 00:39:24.322947 1311248 logs.go:282] 0 containers: []
	W1218 00:39:24.322965 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:24.322974 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:24.322984 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:24.378307 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:24.378325 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:24.395279 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:24.395296 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:24.478731 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:24.469907   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.470803   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.472526   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.473242   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.474699   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:24.469907   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.470803   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.472526   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.473242   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:24.474699   16784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:24.478740 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:24.478750 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:24.539558 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:24.539578 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.069527 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:27.079511 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:27.079570 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:27.104730 1311248 cri.go:89] found id: ""
	I1218 00:39:27.104747 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.104754 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:27.104759 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:27.104826 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:27.134528 1311248 cri.go:89] found id: ""
	I1218 00:39:27.134543 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.134551 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:27.134556 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:27.134618 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:27.160290 1311248 cri.go:89] found id: ""
	I1218 00:39:27.160304 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.160311 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:27.160316 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:27.160374 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:27.187607 1311248 cri.go:89] found id: ""
	I1218 00:39:27.187621 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.187628 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:27.187634 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:27.187691 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:27.214602 1311248 cri.go:89] found id: ""
	I1218 00:39:27.214616 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.214623 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:27.214630 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:27.214690 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:27.239452 1311248 cri.go:89] found id: ""
	I1218 00:39:27.239466 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.239474 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:27.239479 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:27.239538 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:27.268209 1311248 cri.go:89] found id: ""
	I1218 00:39:27.268232 1311248 logs.go:282] 0 containers: []
	W1218 00:39:27.268240 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:27.268248 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:27.268259 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:27.283007 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:27.283033 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:27.351624 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:27.341545   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.342008   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.344497   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.345754   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.346472   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:27.341545   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.342008   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.344497   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.345754   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:27.346472   16886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:27.351634 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:27.351644 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:27.414794 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:27.414814 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:27.449027 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:27.449042 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
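	Each gathering step bounds its output so a wedged node cannot flood the report: journalctl is capped at the newest 400 lines with -n 400, and dmesg is filtered to warn/err/crit/alert/emerg severities and piped through tail -n 400. A minimal helper in the same spirit (the function name is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitLogs grabs the newest n lines of a systemd unit's journal,
	// mirroring the "journalctl -u kubelet -n 400" calls in the log above.
	func unitLogs(unit string, n int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
		return string(out), err
	}

	func main() {
		logs, err := unitLogs("kubelet", 400)
		if err != nil {
			fmt.Println("journalctl failed:", err)
			return
		}
		fmt.Print(logs)
	}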
	I1218 00:39:30.008353 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:30.051512 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:30.051599 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:30.142207 1311248 cri.go:89] found id: ""
	I1218 00:39:30.142226 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.142234 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:30.142241 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:30.142317 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:30.175952 1311248 cri.go:89] found id: ""
	I1218 00:39:30.175967 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.175979 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:30.175985 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:30.176054 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:30.202613 1311248 cri.go:89] found id: ""
	I1218 00:39:30.202640 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.202649 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:30.202655 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:30.202718 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:30.229638 1311248 cri.go:89] found id: ""
	I1218 00:39:30.229653 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.229661 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:30.229666 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:30.229728 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:30.261192 1311248 cri.go:89] found id: ""
	I1218 00:39:30.261206 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.261214 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:30.261220 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:30.261285 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:30.288158 1311248 cri.go:89] found id: ""
	I1218 00:39:30.288173 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.288180 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:30.288189 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:30.288251 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:30.314418 1311248 cri.go:89] found id: ""
	I1218 00:39:30.314432 1311248 logs.go:282] 0 containers: []
	W1218 00:39:30.314441 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:30.314450 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:30.314462 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:30.369830 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:30.369849 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:39:30.385018 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:30.385037 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:30.467908 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:30.459340   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.460070   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.461836   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.462203   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:30.463912   16993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:30.467920 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:30.467930 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:30.529075 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:30.529095 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:33.059241 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:33.070119 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:39:33.070182 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:39:33.095716 1311248 cri.go:89] found id: ""
	I1218 00:39:33.095730 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.095738 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:39:33.095744 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:39:33.095804 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:39:33.121681 1311248 cri.go:89] found id: ""
	I1218 00:39:33.121697 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.121711 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:39:33.121717 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:39:33.121783 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:39:33.147424 1311248 cri.go:89] found id: ""
	I1218 00:39:33.147438 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.147445 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:39:33.147451 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:39:33.147514 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:39:33.173916 1311248 cri.go:89] found id: ""
	I1218 00:39:33.173931 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.173938 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:39:33.173943 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:39:33.174004 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:39:33.199675 1311248 cri.go:89] found id: ""
	I1218 00:39:33.199690 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.199697 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:39:33.199702 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:39:33.199761 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:39:33.229684 1311248 cri.go:89] found id: ""
	I1218 00:39:33.229698 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.229706 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:39:33.229711 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:39:33.229771 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:39:33.255931 1311248 cri.go:89] found id: ""
	I1218 00:39:33.255955 1311248 logs.go:282] 0 containers: []
	W1218 00:39:33.255963 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:39:33.255971 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:39:33.255981 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:39:33.312520 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:39:33.312538 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
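The dmesg invocation above limits kernel messages to warning severity and higher, with paging and color disabled so the output is capture-friendly. Spelled out with long-form flags (per util-linux dmesg; behaviorally the same command):

    sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400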
	I1218 00:39:33.327008 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:39:33.327024 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:39:33.392853 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:39:33.382090   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.382711   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.384450   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.385061   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:39:33.386695   17099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:39:33.392863 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:39:33.392873 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:39:33.462852 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:39:33.462872 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 00:39:35.991111 1311248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:39:36.001578 1311248 kubeadm.go:602] duration metric: took 4m4.636770246s to restartPrimaryControlPlane
	W1218 00:39:36.001631 1311248 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1218 00:39:36.001712 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:39:36.428039 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:39:36.441875 1311248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 00:39:36.449799 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:39:36.449855 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:39:36.457535 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:39:36.457543 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:39:36.457593 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:39:36.465339 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:39:36.465393 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:39:36.472406 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:39:36.480110 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:39:36.480163 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:39:36.487432 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.494964 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:39:36.495019 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:39:36.502375 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:39:36.509914 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:39:36.509976 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
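The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Here every grep exits with status 2 because kubeadm reset removed the files, so each rm is a no-op. Condensed into a loop (illustrative; the endpoint and paths are taken from the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done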
	I1218 00:39:36.517325 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:39:36.642706 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:39:36.643096 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:39:36.709498 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:43:38.241451 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 00:43:38.241477 1311248 kubeadm.go:319] 
	I1218 00:43:38.241546 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 00:43:38.245587 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.245639 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.245728 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.245779 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.245813 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.245856 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.245904 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.245947 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.246021 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.246074 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.246124 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.246169 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.246253 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.246316 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.246394 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.246489 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.246578 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.246661 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.249668 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.249761 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.249825 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.249900 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.249985 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.250056 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.250107 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.250167 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.250231 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.250306 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.250386 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.250429 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.250494 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:38.250547 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:38.250611 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:38.250669 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:38.250731 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:38.250784 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:38.250896 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:38.250969 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:38.255653 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:38.255752 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:38.255840 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:38.255905 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:38.256008 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:38.256128 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:38.256248 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:38.256329 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:38.256365 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:38.256499 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:38.256681 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:43:38.256752 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000096267s
	I1218 00:43:38.256755 1311248 kubeadm.go:319] 
	I1218 00:43:38.256814 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:43:38.256853 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:43:38.256963 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:43:38.256967 1311248 kubeadm.go:319] 
	I1218 00:43:38.257093 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:43:38.257126 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:43:38.257155 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:43:38.257212 1311248 kubeadm.go:319] 
	W1218 00:43:38.257278 1311248 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000096267s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
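Of the three warnings in the stderr above, two are environmental (the missing "configs" kernel module and the not-enabled kubelet service); the cgroups v1 deprecation is the only one kubeadm names a remedy for. Keeping cgroup v1 support would mean setting the option it cites in the kubelet configuration, roughly as below (fragment is illustrative; the field name comes from the warning text, and the [kubelet-start] lines show minikube writing the file to /var/lib/kubelet/config.yaml):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false   # per the [WARNING SystemVerification] message above

That said, the failure here is the kubelet never becoming healthy at all, so this warning is context rather than the established root cause.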
	
	I1218 00:43:38.257393 1311248 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 00:43:38.672580 1311248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:43:38.686195 1311248 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 00:43:38.686247 1311248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 00:43:38.694107 1311248 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 00:43:38.694119 1311248 kubeadm.go:158] found existing configuration files:
	
	I1218 00:43:38.694170 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1218 00:43:38.702289 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 00:43:38.702343 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 00:43:38.710380 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1218 00:43:38.718160 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 00:43:38.718218 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 00:43:38.726244 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.734209 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 00:43:38.734268 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 00:43:38.741907 1311248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1218 00:43:38.749716 1311248 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 00:43:38.749773 1311248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 00:43:38.757471 1311248 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 00:43:38.797919 1311248 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 00:43:38.797966 1311248 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 00:43:38.877731 1311248 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 00:43:38.877795 1311248 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 00:43:38.877835 1311248 kubeadm.go:319] OS: Linux
	I1218 00:43:38.877879 1311248 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 00:43:38.877926 1311248 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 00:43:38.877972 1311248 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 00:43:38.878019 1311248 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 00:43:38.878065 1311248 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 00:43:38.878112 1311248 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 00:43:38.878155 1311248 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 00:43:38.878202 1311248 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 00:43:38.878247 1311248 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 00:43:38.941330 1311248 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 00:43:38.941446 1311248 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 00:43:38.941535 1311248 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 00:43:38.951935 1311248 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 00:43:38.957317 1311248 out.go:252]   - Generating certificates and keys ...
	I1218 00:43:38.957410 1311248 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 00:43:38.957474 1311248 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 00:43:38.957580 1311248 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 00:43:38.957646 1311248 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 00:43:38.957723 1311248 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 00:43:38.957784 1311248 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 00:43:38.957852 1311248 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 00:43:38.957913 1311248 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 00:43:38.957987 1311248 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 00:43:38.958059 1311248 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 00:43:38.958095 1311248 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 00:43:38.958151 1311248 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 00:43:39.202920 1311248 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 00:43:39.377892 1311248 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 00:43:39.964483 1311248 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 00:43:40.103558 1311248 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 00:43:40.457630 1311248 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 00:43:40.458383 1311248 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 00:43:40.462089 1311248 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 00:43:40.465489 1311248 out.go:252]   - Booting up control plane ...
	I1218 00:43:40.465583 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 00:43:40.465654 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 00:43:40.465716 1311248 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 00:43:40.486385 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 00:43:40.486497 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 00:43:40.494535 1311248 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 00:43:40.494848 1311248 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 00:43:40.495030 1311248 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 00:43:40.625355 1311248 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 00:43:40.625497 1311248 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 00:47:40.625149 1311248 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000298437s
	I1218 00:47:40.625174 1311248 kubeadm.go:319] 
	I1218 00:47:40.625227 1311248 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 00:47:40.625262 1311248 kubeadm.go:319] 	- The kubelet is not running
	I1218 00:47:40.625362 1311248 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 00:47:40.625367 1311248 kubeadm.go:319] 
	I1218 00:47:40.625481 1311248 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 00:47:40.625513 1311248 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 00:47:40.625550 1311248 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 00:47:40.625553 1311248 kubeadm.go:319] 
	I1218 00:47:40.629455 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 00:47:40.629954 1311248 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 00:47:40.630083 1311248 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 00:47:40.630316 1311248 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 00:47:40.630321 1311248 kubeadm.go:319] 
	I1218 00:47:40.630384 1311248 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
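The same wait-control-plane phase has now failed twice, this time with "context deadline exceeded" rather than "connection refused": the 4m0s budget expired without the health endpoint ever answering. The probe kubeadm performs, plus the checks its error text recommends, assuming a shell on the node:

    curl -sSL http://127.0.0.1:10248/healthz    # the probe from the [kubelet-check] lines
    systemctl status kubelet --no-pager         # recommended by the error text above
    sudo journalctl -xeu kubelet -n 100         # likewise; -n limits output to recent entries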
	I1218 00:47:40.630455 1311248 kubeadm.go:403] duration metric: took 12m9.299018648s to StartCluster
	I1218 00:47:40.630487 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 00:47:40.630549 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 00:47:40.655474 1311248 cri.go:89] found id: ""
	I1218 00:47:40.655489 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.655497 1311248 logs.go:284] No container was found matching "kube-apiserver"
	I1218 00:47:40.655502 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 00:47:40.655558 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 00:47:40.681677 1311248 cri.go:89] found id: ""
	I1218 00:47:40.681692 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.681699 1311248 logs.go:284] No container was found matching "etcd"
	I1218 00:47:40.681705 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 00:47:40.681772 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 00:47:40.714293 1311248 cri.go:89] found id: ""
	I1218 00:47:40.714307 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.714314 1311248 logs.go:284] No container was found matching "coredns"
	I1218 00:47:40.714319 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 00:47:40.714379 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 00:47:40.739065 1311248 cri.go:89] found id: ""
	I1218 00:47:40.739089 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.739097 1311248 logs.go:284] No container was found matching "kube-scheduler"
	I1218 00:47:40.739102 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 00:47:40.739168 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 00:47:40.763653 1311248 cri.go:89] found id: ""
	I1218 00:47:40.763666 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.763673 1311248 logs.go:284] No container was found matching "kube-proxy"
	I1218 00:47:40.763678 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 00:47:40.763737 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 00:47:40.789038 1311248 cri.go:89] found id: ""
	I1218 00:47:40.789052 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.789059 1311248 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 00:47:40.789065 1311248 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 00:47:40.789124 1311248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 00:47:40.817866 1311248 cri.go:89] found id: ""
	I1218 00:47:40.817880 1311248 logs.go:282] 0 containers: []
	W1218 00:47:40.817887 1311248 logs.go:284] No container was found matching "kindnet"
	I1218 00:47:40.817895 1311248 logs.go:123] Gathering logs for kubelet ...
	I1218 00:47:40.817905 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 00:47:40.877071 1311248 logs.go:123] Gathering logs for dmesg ...
	I1218 00:47:40.877090 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 00:47:40.891818 1311248 logs.go:123] Gathering logs for describe nodes ...
	I1218 00:47:40.891835 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 00:47:40.956585 1311248 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 00:47:40.948133   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.948822   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.950539   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.951213   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:40.952828   20878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 00:47:40.956595 1311248 logs.go:123] Gathering logs for containerd ...
	I1218 00:47:40.956605 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 00:47:41.023372 1311248 logs.go:123] Gathering logs for container status ...
	I1218 00:47:41.023390 1311248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1218 00:47:41.051126 1311248 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 00:47:41.051157 1311248 out.go:285] * 
	W1218 00:47:41.051213 1311248 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.051229 1311248 out.go:285] * 
	W1218 00:47:41.053388 1311248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 00:47:41.058223 1311248 out.go:203] 
	W1218 00:47:41.061890 1311248 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000298437s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 00:47:41.061936 1311248 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 00:47:41.061956 1311248 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 00:47:41.065091 1311248 out.go:203] 
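The failure above is self-diagnosing: the host is still on cgroup v1 (see the SystemVerification warning in the stderr), and kubelet v1.35 refuses to start there, so the control plane never comes up. A hedged sketch of the checks and workarounds the output itself points to (profile name taken from this run; the failCgroupV1 key mirrors the 'FailCgroupV1' option named in the warning, so treat the exact spelling as version-dependent):

	# Identify the host cgroup hierarchy: "cgroup2fs" means v2,
	# "tmpfs" means the legacy v1 hierarchy that kubelet v1.35 rejects.
	stat -fc %T /sys/fs/cgroup
	# Workaround suggested by minikube in the message above:
	minikube start -p functional-232602 --extra-config=kubelet.cgroup-driver=systemd
	# Per the kubeadm warning, a kubelet >= v1.35 on cgroup v1 must also
	# opt out of the hard failure in its KubeletConfiguration, roughly:
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false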
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724301153Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724312311Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724321337Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724338510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724355125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724387017Z" level=info msg="Connect containerd service"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.724787739Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.725358196Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744687707Z" level=info msg="Start subscribing containerd event"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744774532Z" level=info msg="Start recovering state"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.744732367Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.745188078Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.785773770Z" level=info msg="Start event monitor"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.785958718Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786026286Z" level=info msg="Start streaming server"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786098128Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786157901Z" level=info msg="runtime interface starting up..."
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786221604Z" level=info msg="starting plugins..."
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.786283461Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 00:35:29 functional-232602 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 00:35:29 functional-232602 containerd[9652]: time="2025-12-18T00:35:29.788365819Z" level=info msg="containerd successfully booted in 0.084734s"
	Dec 18 00:47:50 functional-232602 containerd[9652]: time="2025-12-18T00:47:50.467212780Z" level=info msg="No images store for sha256:88871186651aea0ed5608315e891b3426e70e8f85f75cbd135c4079fb9a8af37"
	Dec 18 00:47:50 functional-232602 containerd[9652]: time="2025-12-18T00:47:50.469723945Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-232602\""
	Dec 18 00:47:50 functional-232602 containerd[9652]: time="2025-12-18T00:47:50.479684357Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 00:47:50 functional-232602 containerd[9652]: time="2025-12-18T00:47:50.480115132Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-232602\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 00:47:51.560553   21676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:51.561482   21676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:51.563283   21676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:51.563799   21676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1218 00:47:51.565557   21676 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:47:51 up  7:30,  0 user,  load average: 0.91, 0.40, 0.49
	Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 00:47:48 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:48 functional-232602 kubelet[21408]: E1218 00:47:48.705338   21408 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:48 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:48 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:49 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 18 00:47:49 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:49 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:49 functional-232602 kubelet[21457]: E1218 00:47:49.466750   21457 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:49 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:49 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:50 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 332.
	Dec 18 00:47:50 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:50 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:50 functional-232602 kubelet[21525]: E1218 00:47:50.249951   21525 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:50 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:50 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:50 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 333.
	Dec 18 00:47:50 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:50 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:50 functional-232602 kubelet[21570]: E1218 00:47:50.960204   21570 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 00:47:50 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 00:47:50 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 00:47:51 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 334.
	Dec 18 00:47:51 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 00:47:51 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
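The restart counter climbing from 331 to 334 in under three seconds shows systemd respawning a kubelet that fails configuration validation before it can ever serve /healthz, which is exactly why the kubeadm wait above gave up after 4m0s. A minimal way to confirm the loop from inside the node (standard minikube/systemd commands; the healthz URL is the one kubeadm polls):

	minikube -p functional-232602 ssh
	systemctl status kubelet                        # Main process exited, status=1/FAILURE
	journalctl -xeu kubelet --no-pager | tail -n 20 # repeats the cgroup v1 validation error
	curl -s http://127.0.0.1:10248/healthz          # refused while the loop persists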
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 2 (544.684351ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (3.11s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-232602 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-232602 create deployment hello-node --image kicbase/echo-server: exit status 1 (91.362737ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-232602 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.09s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 service list: exit status 103 (312.180249ms)

-- stdout --
	* The control-plane node functional-232602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232602"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-232602 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-232602 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-232602\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.31s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 service list -o json: exit status 103 (340.881372ms)

-- stdout --
	* The control-plane node functional-232602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232602"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-232602 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.34s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 service --namespace=default --https --url hello-node: exit status 103 (412.654018ms)

-- stdout --
	* The control-plane node functional-232602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232602"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-232602 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.41s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 service hello-node --url --format={{.IP}}: exit status 103 (352.043453ms)

-- stdout --
	* The control-plane node functional-232602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232602"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-232602 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-232602 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-232602\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.35s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1218 00:47:56.873297 1326235 out.go:360] Setting OutFile to fd 1 ...
I1218 00:47:56.873633 1326235 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:47:56.873644 1326235 out.go:374] Setting ErrFile to fd 2...
I1218 00:47:56.873650 1326235 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:47:56.873912 1326235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:47:56.874219 1326235 mustload.go:66] Loading cluster: functional-232602
I1218 00:47:56.874651 1326235 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:47:56.875112 1326235 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:47:56.924497 1326235 host.go:66] Checking if "functional-232602" exists ...
I1218 00:47:56.924912 1326235 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1218 00:47:57.072228 1326235 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:47:57.05946068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1218 00:47:57.072354 1326235 api_server.go:166] Checking apiserver status ...
I1218 00:47:57.072414 1326235 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1218 00:47:57.072500 1326235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:47:57.128036 1326235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
W1218 00:47:57.255782 1326235 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1218 00:47:57.259123 1326235 out.go:179] * The control-plane node functional-232602 apiserver is not running: (state=Stopped)
I1218 00:47:57.261960 1326235 out.go:179]   To start a cluster, run: "minikube start -p functional-232602"

stdout: * The control-plane node functional-232602 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-232602"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1326234: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 service hello-node --url: exit status 103 (487.673147ms)

-- stdout --
	* The control-plane node functional-232602 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-232602"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-232602 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-232602 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-232602"
functional_test.go:1579: failed to parse "* The control-plane node functional-232602 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-232602\"": parse "* The control-plane node functional-232602 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-232602\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.49s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-232602 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-232602 apply -f testdata/testsvc.yaml: exit status 1 (147.893325ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-232602 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.15s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (92.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.106.246.222": Temporary Error: Get "http://10.106.246.222": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-232602 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-232602 get svc nginx-svc: exit status 1 (66.50214ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-232602 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (92.62s)
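Every remaining failure in this parallel group reduces to the same root cause: nothing is listening on apiserver port 8441, so each kubectl call is refused before the feature under test is even exercised. A quick hedged triage sketch (profile name and pgrep pattern taken verbatim from the logs above):

	minikube -p functional-232602 status                    # apiserver: Stopped
	kubectl --context functional-232602 get nodes           # connection refused on 192.168.49.2:8441
	minikube -p functional-232602 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"   # no pid printed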

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1766018977267713907" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1766018977267713907" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1766018977267713907" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001/test-1766018977267713907
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (368.915618ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1218 00:49:37.636892 1261148 retry.go:31] will retry after 634.001178ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 18 00:49 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 18 00:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 18 00:49 test-1766018977267713907
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh cat /mount-9p/test-1766018977267713907
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-232602 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-232602 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (60.559357ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-232602 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (269.687331ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=38029)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 18 00:49 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 18 00:49 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 18 00:49 test-1766018977267713907
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-232602 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:38029
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...


functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001:/mount-9p --alsologtostderr -v=1] stderr:
I1218 00:49:37.331546 1328585 out.go:360] Setting OutFile to fd 1 ...
I1218 00:49:37.331676 1328585 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:37.331688 1328585 out.go:374] Setting ErrFile to fd 2...
I1218 00:49:37.331693 1328585 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:37.332053 1328585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:49:37.332335 1328585 mustload.go:66] Loading cluster: functional-232602
I1218 00:49:37.333004 1328585 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:37.333523 1328585 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:49:37.351886 1328585 host.go:66] Checking if "functional-232602" exists ...
I1218 00:49:37.352338 1328585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1218 00:49:37.457987 1328585 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:49:37.444196026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1218 00:49:37.458142 1328585 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1218 00:49:37.494812 1328585 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001 into VM as /mount-9p ...
I1218 00:49:37.497848 1328585 out.go:179]   - Mount type:   9p
I1218 00:49:37.500727 1328585 out.go:179]   - User ID:      docker
I1218 00:49:37.503584 1328585 out.go:179]   - Group ID:     docker
I1218 00:49:37.506578 1328585 out.go:179]   - Version:      9p2000.L
I1218 00:49:37.509557 1328585 out.go:179]   - Message Size: 262144
I1218 00:49:37.512751 1328585 out.go:179]   - Options:      map[]
I1218 00:49:37.515665 1328585 out.go:179]   - Bind Address: 192.168.49.1:38029
I1218 00:49:37.518580 1328585 out.go:179] * Userspace file server: 
I1218 00:49:37.518932 1328585 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1218 00:49:37.522144 1328585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:49:37.541750 1328585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:49:37.651740 1328585 mount.go:180] unmount for /mount-9p ran successfully
I1218 00:49:37.651776 1328585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1218 00:49:37.660115 1328585 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=38029,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1218 00:49:37.670642 1328585 main.go:127] stdlog: ufs.go:141 connected
I1218 00:49:37.670796 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tversion tag 65535 msize 262144 version '9P2000.L'
I1218 00:49:37.670834 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rversion tag 65535 msize 262144 version '9P2000'
I1218 00:49:37.671069 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1218 00:49:37.671130 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rattach tag 0 aqid (c9d629 2eef0def 'd')
I1218 00:49:37.671412 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 0
I1218 00:49:37.671481 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d629 2eef0def 'd') m d775 at 0 mt 1766018977 l 4096 t 0 d 0 ext )
I1218 00:49:37.673209 1328585 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/.mount-process: {Name:mk3fa58b818fe039d0e453be24d8ea0047460ecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:49:37.673399 1328585 mount.go:105] mount successful: ""
I1218 00:49:37.676897 1328585 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1953793689/001 to /mount-9p
I1218 00:49:37.679773 1328585 out.go:203] 
I1218 00:49:37.682615 1328585 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1218 00:49:38.821618 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 0
I1218 00:49:38.821753 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d629 2eef0def 'd') m d775 at 0 mt 1766018977 l 4096 t 0 d 0 ext )
I1218 00:49:38.822139 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 1 
I1218 00:49:38.822179 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 
I1218 00:49:38.822354 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Topen tag 0 fid 1 mode 0
I1218 00:49:38.822439 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Ropen tag 0 qid (c9d629 2eef0def 'd') iounit 0
I1218 00:49:38.822582 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 0
I1218 00:49:38.822640 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d629 2eef0def 'd') m d775 at 0 mt 1766018977 l 4096 t 0 d 0 ext )
I1218 00:49:38.822839 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 1 offset 0 count 262120
I1218 00:49:38.822984 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 258
I1218 00:49:38.823137 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 1 offset 258 count 261862
I1218 00:49:38.823170 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 0
I1218 00:49:38.823312 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 1 offset 258 count 262120
I1218 00:49:38.823351 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 0
I1218 00:49:38.823509 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1218 00:49:38.823564 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 (c9d62a 2eef0def '') 
I1218 00:49:38.823712 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:38.823764 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d62a 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:38.823904 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:38.823985 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d62a 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:38.824136 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 2
I1218 00:49:38.824179 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:38.824324 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 2 0:'test-1766018977267713907' 
I1218 00:49:38.824366 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 (c9d62c 2eef0def '') 
I1218 00:49:38.824499 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:38.824533 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('test-1766018977267713907' 'jenkins' 'jenkins' '' q (c9d62c 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:38.824715 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:38.824765 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('test-1766018977267713907' 'jenkins' 'jenkins' '' q (c9d62c 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:38.824898 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 2
I1218 00:49:38.824928 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:38.825130 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1218 00:49:38.825192 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 (c9d62b 2eef0def '') 
I1218 00:49:38.825330 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:38.825529 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d62b 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:38.825689 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:38.825736 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d62b 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:38.825902 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 2
I1218 00:49:38.825941 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:38.826111 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 1 offset 258 count 262120
I1218 00:49:38.826161 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 0
I1218 00:49:38.826319 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 1
I1218 00:49:38.826357 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:39.124747 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 1 0:'test-1766018977267713907' 
I1218 00:49:39.124846 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 (c9d62c 2eef0def '') 
I1218 00:49:39.125017 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 1
I1218 00:49:39.125064 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('test-1766018977267713907' 'jenkins' 'jenkins' '' q (c9d62c 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:39.125212 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 1 newfid 2 
I1218 00:49:39.125253 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 
I1218 00:49:39.125370 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Topen tag 0 fid 2 mode 0
I1218 00:49:39.125418 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Ropen tag 0 qid (c9d62c 2eef0def '') iounit 0
I1218 00:49:39.125550 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 1
I1218 00:49:39.125583 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('test-1766018977267713907' 'jenkins' 'jenkins' '' q (c9d62c 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:39.125716 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 2 offset 0 count 262120
I1218 00:49:39.125770 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 24
I1218 00:49:39.125903 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 2 offset 24 count 262120
I1218 00:49:39.125931 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 0
I1218 00:49:39.126073 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 2 offset 24 count 262120
I1218 00:49:39.126110 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 0
I1218 00:49:39.126253 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 2
I1218 00:49:39.126297 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:39.126467 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 1
I1218 00:49:39.126491 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:39.459801 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 0
I1218 00:49:39.459877 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d629 2eef0def 'd') m d775 at 0 mt 1766018977 l 4096 t 0 d 0 ext )
I1218 00:49:39.460255 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 1 
I1218 00:49:39.460328 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 
I1218 00:49:39.460477 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Topen tag 0 fid 1 mode 0
I1218 00:49:39.460531 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Ropen tag 0 qid (c9d629 2eef0def 'd') iounit 0
I1218 00:49:39.460688 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 0
I1218 00:49:39.460723 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d629 2eef0def 'd') m d775 at 0 mt 1766018977 l 4096 t 0 d 0 ext )
I1218 00:49:39.460881 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 1 offset 0 count 262120
I1218 00:49:39.460980 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 258
I1218 00:49:39.461133 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 1 offset 258 count 261862
I1218 00:49:39.461165 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 0
I1218 00:49:39.461404 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 1 offset 258 count 262120
I1218 00:49:39.461433 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 0
I1218 00:49:39.461584 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1218 00:49:39.461619 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 (c9d62a 2eef0def '') 
I1218 00:49:39.461733 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:39.461765 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d62a 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:39.461903 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:39.461937 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d62a 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:39.462054 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 2
I1218 00:49:39.462077 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:39.462223 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 2 0:'test-1766018977267713907' 
I1218 00:49:39.462277 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 (c9d62c 2eef0def '') 
I1218 00:49:39.462409 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:39.462443 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('test-1766018977267713907' 'jenkins' 'jenkins' '' q (c9d62c 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:39.462572 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:39.462602 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('test-1766018977267713907' 'jenkins' 'jenkins' '' q (c9d62c 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:39.462722 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 2
I1218 00:49:39.462743 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:39.462888 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1218 00:49:39.462920 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rwalk tag 0 (c9d62b 2eef0def '') 
I1218 00:49:39.463075 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:39.463109 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d62b 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:39.463257 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tstat tag 0 fid 2
I1218 00:49:39.463292 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d62b 2eef0def '') m 644 at 0 mt 1766018977 l 24 t 0 d 0 ext )
I1218 00:49:39.463408 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 2
I1218 00:49:39.463429 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:39.463563 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tread tag 0 fid 1 offset 258 count 262120
I1218 00:49:39.463591 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rread tag 0 count 0
I1218 00:49:39.463718 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 1
I1218 00:49:39.463749 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:39.465365 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1218 00:49:39.465470 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rerror tag 0 ename 'file not found' ecode 0
I1218 00:49:39.738362 1328585 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44678 Tclunk tag 0 fid 0
I1218 00:49:39.738419 1328585 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44678 Rclunk tag 0
I1218 00:49:39.739584 1328585 main.go:127] stdlog: ufs.go:147 disconnected
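The trace above is a 9P2000 exchange: every `>>>` line is a client request (Twalk resolves a name into a new fid, Topen/Tread open and read it, Tclunk releases the fid) and the `<<<` line with the same tag is the server's reply; the `Rerror ... 'file not found'` for 'pod-dates' means that walk target did not exist yet. As a minimal sketch (a hypothetical helper, not part of minikube), the message types in such a trace can be tallied from stdin:

	// tally9p.go - hypothetical helper: count 9P message types in a saved
	// minikube mount trace like the one above. Output order is unspecified.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"regexp"
	)

	// Matches e.g. ">>> 192.168.49.2:44678 Tclunk tag 0 fid 2".
	var msgRe = regexp.MustCompile(`(>>>|<<<) \S+ ([TR][a-z]+) tag (\d+)`)

	func main() {
		counts := map[string]int{}
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			if m := msgRe.FindStringSubmatch(sc.Text()); m != nil {
				counts[m[2]]++ // e.g. "Twalk", "Rread", "Tclunk"
			}
		}
		for msg, n := range counts {
			fmt.Printf("%-8s %d\n", msg, n)
		}
	}

Run as `go run tally9p.go < mount.log` against a captured trace.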
I1218 00:49:39.762692 1328585 out.go:179] * Unmounting /mount-9p ...
I1218 00:49:39.765767 1328585 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1218 00:49:39.773828 1328585 mount.go:180] unmount for /mount-9p ran successfully
I1218 00:49:39.773945 1328585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/.mount-process: {Name:mk3fa58b818fe039d0e453be24d8ea0047460ecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:49:39.777167 1328585 out.go:203] 
W1218 00:49:39.780171 1328585 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1218 00:49:39.782966 1328585 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.59s)

TestKubernetesUpgrade (802.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-675544 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-675544 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.03801687s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-675544
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-675544: (1.334576427s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-675544 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-675544 status --format={{.Host}}: exit status 7 (71.167885ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
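The `--format={{.Host}}` flag is a Go text/template rendered against the status object, which is why bare `Stopped` is all that comes back on stdout while the exit status carries the machine state. A minimal sketch of that pattern (illustrative struct and fields, not minikube's actual Status type):

	// Hypothetical sketch of a --format={{.Host}} style flag: render a Go
	// text/template against a status struct, as the command above does.
	package main

	import (
		"fmt"
		"log"
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			log.Fatal(err)
		}
		fmt.Println() // the captured stdout above also ends with a newline
	}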
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-675544 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1218 01:17:57.378747 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-675544 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m35.23170391s)

-- stdout --
	* [kubernetes-upgrade-675544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-675544" primary control-plane node in "kubernetes-upgrade-675544" cluster
	* Pulling base image v0.0.48-1765966054-22186 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...

-- /stdout --
** stderr ** 
	I1218 01:17:50.904438 1458839 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:17:50.904591 1458839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:17:50.904602 1458839 out.go:374] Setting ErrFile to fd 2...
	I1218 01:17:50.904608 1458839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:17:50.904898 1458839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:17:50.905314 1458839 out.go:368] Setting JSON to false
	I1218 01:17:50.906263 1458839 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28817,"bootTime":1765991854,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:17:50.906344 1458839 start.go:143] virtualization:  
	I1218 01:17:50.917866 1458839 out.go:179] * [kubernetes-upgrade-675544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:17:50.921314 1458839 notify.go:221] Checking for updates...
	I1218 01:17:50.924752 1458839 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:17:50.927636 1458839 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:17:50.930633 1458839 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:17:50.933421 1458839 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:17:50.936229 1458839 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:17:50.939041 1458839 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:17:50.942322 1458839 config.go:182] Loaded profile config "kubernetes-upgrade-675544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1218 01:17:50.942934 1458839 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:17:50.971302 1458839 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:17:50.971432 1458839 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:17:51.051359 1458839 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:17:51.041888072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:17:51.051470 1458839 docker.go:319] overlay module found
	I1218 01:17:51.054641 1458839 out.go:179] * Using the docker driver based on existing profile
	I1218 01:17:51.057529 1458839 start.go:309] selected driver: docker
	I1218 01:17:51.057552 1458839 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-675544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-675544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:17:51.057657 1458839 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:17:51.058389 1458839 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:17:51.113381 1458839 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:17:51.103312755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:17:51.113719 1458839 cni.go:84] Creating CNI manager for ""
	I1218 01:17:51.113783 1458839 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:17:51.113835 1458839 start.go:353] cluster config:
	{Name:kubernetes-upgrade-675544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-675544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:17:51.117020 1458839 out.go:179] * Starting "kubernetes-upgrade-675544" primary control-plane node in "kubernetes-upgrade-675544" cluster
	I1218 01:17:51.119749 1458839 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:17:51.122718 1458839 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:17:51.125791 1458839 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:17:51.125847 1458839 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:17:51.125858 1458839 cache.go:65] Caching tarball of preloaded images
	I1218 01:17:51.125897 1458839 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:17:51.125946 1458839 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:17:51.125957 1458839 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:17:51.126065 1458839 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/config.json ...
	I1218 01:17:51.147392 1458839 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:17:51.147421 1458839 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:17:51.147438 1458839 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:17:51.147476 1458839 start.go:360] acquireMachinesLock for kubernetes-upgrade-675544: {Name:mk7267d9a29ebfb76bb2b69d0846d3fc9d466c90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:17:51.147546 1458839 start.go:364] duration metric: took 44.232µs to acquireMachinesLock for "kubernetes-upgrade-675544"
	I1218 01:17:51.147573 1458839 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:17:51.147579 1458839 fix.go:54] fixHost starting: 
	I1218 01:17:51.147886 1458839 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-675544 --format={{.State.Status}}
	I1218 01:17:51.174606 1458839 fix.go:112] recreateIfNeeded on kubernetes-upgrade-675544: state=Stopped err=<nil>
	W1218 01:17:51.174637 1458839 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:17:51.177751 1458839 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-675544" ...
	I1218 01:17:51.177839 1458839 cli_runner.go:164] Run: docker start kubernetes-upgrade-675544
	I1218 01:17:51.439225 1458839 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-675544 --format={{.State.Status}}
	I1218 01:17:51.460759 1458839 kic.go:430] container "kubernetes-upgrade-675544" state is running.
	I1218 01:17:51.461253 1458839 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-675544
	I1218 01:17:51.489811 1458839 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/config.json ...
	I1218 01:17:51.490234 1458839 machine.go:94] provisionDockerMachine start ...
	I1218 01:17:51.490353 1458839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-675544
	I1218 01:17:51.515755 1458839 main.go:143] libmachine: Using SSH client type: native
	I1218 01:17:51.516092 1458839 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34127 <nil> <nil>}
	I1218 01:17:51.516101 1458839 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:17:51.516724 1458839 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39984->127.0.0.1:34127: read: connection reset by peer
	I1218 01:17:54.676944 1458839 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-675544
	
	I1218 01:17:54.677029 1458839 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-675544"
	I1218 01:17:54.677139 1458839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-675544
	I1218 01:17:54.694006 1458839 main.go:143] libmachine: Using SSH client type: native
	I1218 01:17:54.694338 1458839 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34127 <nil> <nil>}
	I1218 01:17:54.694355 1458839 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-675544 && echo "kubernetes-upgrade-675544" | sudo tee /etc/hostname
	I1218 01:17:54.858997 1458839 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-675544
	
	I1218 01:17:54.859077 1458839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-675544
	I1218 01:17:54.877059 1458839 main.go:143] libmachine: Using SSH client type: native
	I1218 01:17:54.877384 1458839 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34127 <nil> <nil>}
	I1218 01:17:54.877409 1458839 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-675544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-675544/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-675544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:17:55.033420 1458839 main.go:143] libmachine: SSH cmd err, output: <nil>: 
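	Each `About to run SSH command` / `SSH cmd err, output:` pair above is libmachine executing a provisioning command over the container's published SSH port (127.0.0.1:34127 in this log, key path per sshutil.go). A minimal sketch of that round-trip with golang.org/x/crypto/ssh; the user, the single attempt, and the error handling are illustrative assumptions, not minikube's actual code:

	// Minimal sketch, not minikube's libmachine: run `hostname` over the
	// forwarded SSH port seen in the log above.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Port and key path as reported by the log; adjust for your machine.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kubernetes-upgrade-675544/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34127", cfg)
		if err != nil {
			// libmachine evidently retries here: the log's "connection reset
			// by peer" at 01:17:51 is followed by success at 01:17:54 once
			// sshd is up inside the restarted container.
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}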
	I1218 01:17:55.033458 1458839 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:17:55.033501 1458839 ubuntu.go:190] setting up certificates
	I1218 01:17:55.033515 1458839 provision.go:84] configureAuth start
	I1218 01:17:55.033590 1458839 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-675544
	I1218 01:17:55.052383 1458839 provision.go:143] copyHostCerts
	I1218 01:17:55.052470 1458839 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:17:55.052480 1458839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:17:55.052571 1458839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:17:55.052721 1458839 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:17:55.052734 1458839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:17:55.052766 1458839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:17:55.052834 1458839 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:17:55.052844 1458839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:17:55.052871 1458839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:17:55.052936 1458839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-675544 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-675544 localhost minikube]
	I1218 01:17:55.546063 1458839 provision.go:177] copyRemoteCerts
	I1218 01:17:55.546142 1458839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:17:55.546201 1458839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-675544
	I1218 01:17:55.566004 1458839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34127 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kubernetes-upgrade-675544/id_rsa Username:docker}
	I1218 01:17:55.677421 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:17:55.698296 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1218 01:17:55.717870 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 01:17:55.738234 1458839 provision.go:87] duration metric: took 704.702983ms to configureAuth
	I1218 01:17:55.738258 1458839 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:17:55.738457 1458839 config.go:182] Loaded profile config "kubernetes-upgrade-675544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:17:55.738465 1458839 machine.go:97] duration metric: took 4.248216708s to provisionDockerMachine
	I1218 01:17:55.738472 1458839 start.go:293] postStartSetup for "kubernetes-upgrade-675544" (driver="docker")
	I1218 01:17:55.738484 1458839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:17:55.738541 1458839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:17:55.738580 1458839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-675544
	I1218 01:17:55.756270 1458839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34127 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kubernetes-upgrade-675544/id_rsa Username:docker}
	I1218 01:17:55.865684 1458839 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:17:55.869007 1458839 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:17:55.869037 1458839 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:17:55.869053 1458839 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:17:55.869108 1458839 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:17:55.869224 1458839 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:17:55.869336 1458839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:17:55.877316 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:17:55.897348 1458839 start.go:296] duration metric: took 158.860036ms for postStartSetup
	I1218 01:17:55.897454 1458839 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:17:55.897499 1458839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-675544
	I1218 01:17:55.923680 1458839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34127 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kubernetes-upgrade-675544/id_rsa Username:docker}
	I1218 01:17:56.030988 1458839 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:17:56.036066 1458839 fix.go:56] duration metric: took 4.888479332s for fixHost
	I1218 01:17:56.036105 1458839 start.go:83] releasing machines lock for "kubernetes-upgrade-675544", held for 4.888531704s
	I1218 01:17:56.036181 1458839 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-675544
	I1218 01:17:56.053644 1458839 ssh_runner.go:195] Run: cat /version.json
	I1218 01:17:56.053708 1458839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-675544
	I1218 01:17:56.053765 1458839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:17:56.053832 1458839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-675544
	I1218 01:17:56.072519 1458839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34127 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kubernetes-upgrade-675544/id_rsa Username:docker}
	I1218 01:17:56.097618 1458839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34127 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kubernetes-upgrade-675544/id_rsa Username:docker}
	I1218 01:17:56.303897 1458839 ssh_runner.go:195] Run: systemctl --version
	I1218 01:17:56.311187 1458839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:17:56.316720 1458839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:17:56.316839 1458839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:17:56.327407 1458839 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 01:17:56.327436 1458839 start.go:496] detecting cgroup driver to use...
	I1218 01:17:56.327468 1458839 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:17:56.327546 1458839 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:17:56.345484 1458839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:17:56.359245 1458839 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:17:56.359313 1458839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:17:56.375508 1458839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:17:56.389115 1458839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:17:56.508950 1458839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:17:56.629144 1458839 docker.go:234] disabling docker service ...
	I1218 01:17:56.629293 1458839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:17:56.658321 1458839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:17:56.673407 1458839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:17:56.830020 1458839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:17:56.961967 1458839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:17:56.976204 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:17:56.990816 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:17:57.005254 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:17:57.016024 1458839 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:17:57.016098 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:17:57.025531 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:17:57.034980 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:17:57.044257 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:17:57.053468 1458839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:17:57.062602 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:17:57.071232 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:17:57.080157 1458839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:17:57.089498 1458839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:17:57.096982 1458839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:17:57.104604 1458839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:17:57.218661 1458839 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 01:17:57.372332 1458839 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:17:57.372445 1458839 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:17:57.376365 1458839 start.go:564] Will wait 60s for crictl version
	I1218 01:17:57.376450 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:17:57.380418 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:17:57.404262 1458839 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:17:57.404359 1458839 ssh_runner.go:195] Run: containerd --version
	I1218 01:17:57.427009 1458839 ssh_runner.go:195] Run: containerd --version
	I1218 01:17:57.454890 1458839 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:17:57.458033 1458839 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-675544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:17:57.478957 1458839 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:17:57.483628 1458839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:17:57.494377 1458839 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-675544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-675544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:17:57.494518 1458839 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:17:57.494602 1458839 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:17:57.524773 1458839 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1218 01:17:57.524848 1458839 ssh_runner.go:195] Run: which lz4
	I1218 01:17:57.528820 1458839 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1218 01:17:57.533593 1458839 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 01:17:57.533627 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305659384 bytes)
	I1218 01:18:00.793646 1458839 containerd.go:563] duration metric: took 3.264869103s to copy over tarball
	I1218 01:18:00.793747 1458839 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1218 01:18:02.973669 1458839 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.179891194s)
	I1218 01:18:02.973764 1458839 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
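	The `Process exited with status 2` wrapper above is how the runner reports a failed command: exit status first, then the captured stdout and stderr. A minimal local sketch of producing that shape with os/exec (the tar flags are copied from the log; the real runner executes the command over SSH inside the node):

	// Minimal sketch of surfacing a command's exit status and stderr in the
	// style of the ssh_runner report above (runs locally, not over SSH).
	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		var stdout, stderr bytes.Buffer
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("Process exited with status %d\nstdout:\n%s\nstderr:\n%s",
				ee.ExitCode(), stdout.String(), stderr.String())
		} else if err != nil {
			fmt.Println("failed to start:", err)
		}
	}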
	I1218 01:18:02.973845 1458839 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:18:03.055102 1458839 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1218 01:18:03.055127 1458839 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1218 01:18:03.055196 1458839 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:18:03.055436 1458839 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:18:03.055583 1458839 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:18:03.055682 1458839 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:18:03.055771 1458839 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:18:03.055868 1458839 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1218 01:18:03.055970 1458839 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1218 01:18:03.056058 1458839 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:18:03.057062 1458839 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:18:03.057724 1458839 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:18:03.057770 1458839 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:18:03.057815 1458839 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1218 01:18:03.057854 1458839 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:18:03.057901 1458839 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:18:03.057937 1458839 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1218 01:18:03.057976 1458839 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:18:03.425193 1458839 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1218 01:18:03.425324 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1218 01:18:03.433242 1458839 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1218 01:18:03.433313 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:18:03.507046 1458839 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1218 01:18:03.507169 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:18:03.520752 1458839 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1218 01:18:03.520827 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:18:03.535991 1458839 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1218 01:18:03.536083 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1218 01:18:03.536371 1458839 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1218 01:18:03.536419 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:18:03.545107 1458839 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1218 01:18:03.545182 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:18:03.646708 1458839 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1218 01:18:03.646752 1458839 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1218 01:18:03.646802 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:03.649474 1458839 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1218 01:18:03.649512 1458839 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:18:03.649560 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:03.650700 1458839 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1218 01:18:03.650741 1458839 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:18:03.650778 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:03.689881 1458839 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1218 01:18:03.689969 1458839 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:18:03.690051 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:03.692394 1458839 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1218 01:18:03.692440 1458839 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1218 01:18:03.692485 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:03.692547 1458839 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1218 01:18:03.692572 1458839 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:18:03.692594 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:03.692652 1458839 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1218 01:18:03.692672 1458839 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:18:03.692692 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:03.692746 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1218 01:18:03.692793 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:18:03.692845 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:18:03.699570 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1218 01:18:03.699671 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:18:03.842333 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:18:03.842439 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:18:03.842505 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:18:03.842563 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1218 01:18:03.842629 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:18:03.842698 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1218 01:18:03.842754 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:18:04.101107 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:18:04.101289 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:18:04.101400 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:18:04.101536 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:18:04.101644 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1218 01:18:04.101761 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:18:04.101859 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1218 01:18:04.311240 1458839 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1218 01:18:04.311338 1458839 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1218 01:18:04.311398 1458839 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1218 01:18:04.311482 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:18:04.311583 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:18:04.311637 1458839 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1218 01:18:04.311706 1458839 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1218 01:18:04.311758 1458839 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	W1218 01:18:04.318540 1458839 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1218 01:18:04.318680 1458839 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1218 01:18:04.318755 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:18:04.430027 1458839 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1218 01:18:04.430066 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1218 01:18:04.430168 1458839 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1218 01:18:04.430221 1458839 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1218 01:18:04.430322 1458839 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1218 01:18:04.430352 1458839 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:18:04.430394 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:04.439439 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:18:04.487695 1458839 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1218 01:18:04.487776 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1218 01:18:04.800513 1458839 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1218 01:18:04.800653 1458839 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1218 01:18:04.872312 1458839 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1218 01:18:04.872384 1458839 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1218 01:18:04.872416 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1218 01:18:04.977236 1458839 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1218 01:18:04.977378 1458839 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1218 01:18:05.499269 1458839 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1218 01:18:05.499379 1458839 cache_images.go:94] duration metric: took 2.444238417s to LoadCachedImages
	W1218 01:18:05.499475 1458839 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0: no such file or directory
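The load path above follows a stat / scp / import pattern. A minimal Go sketch (an assumption: simplified from the ssh_runner.go and containerd.go semantics in the log; the "node" SSH alias is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// loadCached stats the remote path, copies the cached tarball on a miss,
// then imports it into containerd's k8s.io namespace, matching
// "sudo ctr -n=k8s.io images import <path>" above.
func loadCached(local, remote string) error {
	if err := exec.Command("ssh", "node", "stat", remote).Run(); err == nil {
		return nil // already transferred
	}
	if err := exec.Command("scp", local, "node:"+remote).Run(); err != nil {
		return fmt.Errorf("scp: %w", err)
	}
	return exec.Command("ssh", "node", "sudo", "ctr", "-n=k8s.io", "images", "import", remote).Run()
}

func main() {
	_ = loadCached("~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1")
}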
	I1218 01:18:05.499516 1458839 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:18:05.499630 1458839 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-675544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-675544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:18:05.499719 1458839 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:18:05.540459 1458839 cni.go:84] Creating CNI manager for ""
	I1218 01:18:05.540487 1458839 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:18:05.540506 1458839 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 01:18:05.540528 1458839 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-675544 NodeName:kubernetes-upgrade-675544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:18:05.540651 1458839 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-675544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 01:18:05.540721 1458839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:18:05.553645 1458839 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:18:05.553718 1458839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:18:05.561624 1458839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1218 01:18:05.574974 1458839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:18:05.589171 1458839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2243 bytes)
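The kubeadm.yaml.new written here is the config printed above. A minimal sketch, assuming it is rendered from a Go text/template (field names here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const tpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.Version}}
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tpl))
	_ = t.Execute(os.Stdout, struct {
		Version string
		Port    int
	}{"v1.35.0-rc.1", 8443})
}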
	I1218 01:18:05.603040 1458839 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:18:05.607474 1458839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
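The bash one-liner above rewrites /etc/hosts idempotently: strip any stale control-plane.minikube.internal line, then append the current mapping. The same logic in a self-contained Go sketch:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<name>" and appends "<ip>\t<name>".
func ensureHostsEntry(hosts, ip, name string) string {
	var keep []string
	for _, l := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(l, "\t"+name) {
			keep = append(keep, l)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n")
}

func main() {
	in, _ := os.ReadFile("/etc/hosts")
	os.Stdout.WriteString(ensureHostsEntry(string(in), "192.168.85.2", "control-plane.minikube.internal"))
}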
	I1218 01:18:05.618491 1458839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:18:05.827766 1458839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:18:05.856873 1458839 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544 for IP: 192.168.85.2
	I1218 01:18:05.856901 1458839 certs.go:195] generating shared ca certs ...
	I1218 01:18:05.856918 1458839 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:18:05.857059 1458839 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:18:05.857111 1458839 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:18:05.857123 1458839 certs.go:257] generating profile certs ...
	I1218 01:18:05.857222 1458839 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.key
	I1218 01:18:05.857295 1458839 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/apiserver.key.583bdb82
	I1218 01:18:05.857338 1458839 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/proxy-client.key
	I1218 01:18:05.857454 1458839 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:18:05.857491 1458839 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:18:05.857507 1458839 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:18:05.857535 1458839 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:18:05.857560 1458839 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:18:05.857584 1458839 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:18:05.857633 1458839 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:18:05.858199 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:18:05.899598 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:18:05.925071 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:18:05.953397 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:18:05.976849 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1218 01:18:06.018480 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 01:18:06.047315 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:18:06.078042 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:18:06.111074 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:18:06.131235 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:18:06.150822 1458839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:18:06.169745 1458839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:18:06.184694 1458839 ssh_runner.go:195] Run: openssl version
	I1218 01:18:06.195085 1458839 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:18:06.203570 1458839 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:18:06.213890 1458839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:18:06.219130 1458839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:18:06.219203 1458839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:18:06.264293 1458839 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:18:06.271884 1458839 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:18:06.279497 1458839 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:18:06.287654 1458839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:18:06.291995 1458839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:18:06.292077 1458839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:18:06.334412 1458839 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:18:06.342164 1458839 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:18:06.350792 1458839 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:18:06.359180 1458839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:18:06.363679 1458839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:18:06.363747 1458839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:18:06.421062 1458839 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
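The openssl/ln sequence above installs each CA the way OpenSSL expects to find it: compute the certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 at the PEM file (51391683.0, 3ec20f2e.0 and b5213941.0 above are those hashes). A sketch with a hypothetical helper:

package main

import (
	"os"
	"os/exec"
	"strings"
)

// installCA hashes the cert with `openssl x509 -hash -noout` and creates the
// subject-hash symlink OpenSSL uses for CA lookup.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return os.Symlink(pem, "/etc/ssl/certs/"+hash+".0")
}

func main() { _ = installCA("/usr/share/ca-certificates/minikubeCA.pem") }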
	I1218 01:18:06.428826 1458839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:18:06.433375 1458839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:18:06.485180 1458839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:18:06.527374 1458839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:18:06.570281 1458839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:18:06.614932 1458839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:18:06.675794 1458839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
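The `-checkend 86400` probes above exit nonzero when a cert expires within 24 hours, which is what would trigger regeneration. A minimal Go wrapper around that exit-code convention:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithin24h reports whether openssl considers the cert expired, or
// expiring, within the next 86400 seconds.
func expiresWithin24h(crt string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run() != nil
}

func main() {
	fmt.Println(expiresWithin24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}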
	I1218 01:18:06.755711 1458839 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-675544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-675544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:18:06.755809 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:18:06.755894 1458839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:18:06.784308 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:18:06.784349 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:18:06.784354 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:18:06.784358 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:18:06.784361 1458839 cri.go:89] found id: ""
	I1218 01:18:06.784424 1458839 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1218 01:18:06.809660 1458839 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-18T01:18:06Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1218 01:18:06.809770 1458839 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:18:06.817671 1458839 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:18:06.817742 1458839 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:18:06.817814 1458839 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:18:06.825155 1458839 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:18:06.825746 1458839 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-675544" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:18:06.825970 1458839 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-675544" cluster setting kubeconfig missing "kubernetes-upgrade-675544" context setting]
	I1218 01:18:06.826389 1458839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:18:06.826988 1458839 kapi.go:59] client config for kubernetes-upgrade-675544: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.key", CAFile:"/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb51f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 01:18:06.827512 1458839 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1218 01:18:06.827532 1458839 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1218 01:18:06.827539 1458839 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1218 01:18:06.827544 1458839 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1218 01:18:06.827555 1458839 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1218 01:18:06.827813 1458839 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:18:06.836445 1458839 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-18 01:17:28.015450262 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-18 01:18:05.598552915 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-675544"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
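The drift detection above relies on diff's exit code: the live kubeadm.yaml is compared against the freshly rendered .new, and any difference (here the v1beta3 to v1beta4 migration, which turns extraArgs maps into name/value lists) forces a reconfigure. A minimal sketch of that check:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted is true when diff exits nonzero, i.e. the files differ.
func configDrifted() bool {
	return exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run() != nil
}

func main() { fmt.Println(configDrifted()) }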
	I1218 01:18:06.836466 1458839 kubeadm.go:1161] stopping kube-system containers ...
	I1218 01:18:06.836476 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1218 01:18:06.836532 1458839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:18:06.861452 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:18:06.861476 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:18:06.861481 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:18:06.861484 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:18:06.861487 1458839 cri.go:89] found id: ""
	I1218 01:18:06.861493 1458839 cri.go:252] Stopping containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:18:06.861546 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:18:06.865216 1458839 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa
	I1218 01:18:06.902063 1458839 ssh_runner.go:195] Run: sudo systemctl stop kubelet
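Shutdown order above: stop every kube-system container with a 10-second grace period, then stop the kubelet so it cannot restart them. A sketch (container ids shortened for illustration):

package main

import "os/exec"

// stopKubeSystem mirrors "crictl stop --timeout=10 <ids...>" followed by
// "systemctl stop kubelet".
func stopKubeSystem(ids []string) error {
	args := append([]string{"/usr/local/bin/crictl", "stop", "--timeout=10"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		return err
	}
	return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}

func main() {
	_ = stopKubeSystem([]string{"a324ed69126d", "0b6b3e3234f2"})
}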
	I1218 01:18:06.919244 1458839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:18:06.928037 1458839 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 18 01:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 18 01:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 18 01:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 18 01:17 /etc/kubernetes/scheduler.conf
	
	I1218 01:18:06.928125 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:18:06.936493 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:18:06.944799 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:18:06.953948 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:18:06.954011 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:18:06.962220 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:18:06.970441 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:18:06.970503 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 01:18:06.977893 1458839 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:18:06.985730 1458839 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 01:18:07.038065 1458839 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 01:18:08.562546 1458839 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.524397191s)
	I1218 01:18:08.562617 1458839 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 01:18:08.898358 1458839 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 01:18:08.986245 1458839 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
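During this restart, minikube replays individual kubeadm init phases against the regenerated config rather than running a full `kubeadm init`, in exactly the order logged above. A sketch of that sequence:

package main

import (
	"os/exec"
	"strings"
)

func main() {
	// Phase order taken from the log: certs, kubeconfig, kubelet-start,
	// control-plane, etcd.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		_ = exec.Command("kubeadm", args...).Run()
	}
}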
	I1218 01:18:09.074376 1458839 api_server.go:52] waiting for apiserver process to appear ...
	I1218 01:18:09.074465 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:09.574899 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:10.074613 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:10.575106 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:11.075236 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:11.575318 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:12.074586 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:12.574774 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:13.074509 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:13.574566 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:14.075002 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:14.576063 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:15.074871 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:15.575379 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:16.075342 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:16.575259 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:17.074910 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:17.574523 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:18.074623 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:18.575242 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:19.075119 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:19.575182 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:20.074590 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:20.574602 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:21.075523 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:21.575468 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:22.074626 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:22.575349 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:23.075165 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:23.575298 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:24.075678 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:24.575552 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:25.074575 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:25.574875 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:26.076806 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:26.575377 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:27.075219 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:27.574532 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:28.074664 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:28.575202 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:29.075169 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:29.575419 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:30.074807 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:30.575360 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:31.075120 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:31.575515 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:32.074596 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:32.575106 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:33.074700 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:33.574738 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:34.075565 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:34.574910 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:35.075373 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:35.575448 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:36.074958 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:36.574573 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:37.075422 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:37.575324 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:38.075426 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:38.574768 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:39.074801 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:39.574750 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:40.075364 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:40.574704 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:41.074521 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:41.574984 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:42.075413 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:42.575407 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:43.074631 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:43.574586 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:44.075301 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:44.575061 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:45.075348 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:45.574617 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:46.075026 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:46.574833 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:47.074564 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:47.575429 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:48.074807 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:48.574588 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:49.074620 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:49.574794 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:50.075406 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:50.575298 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:51.075544 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:51.574653 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:52.075326 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:52.574600 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:53.074734 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:53.574612 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:54.075246 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:54.575450 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:55.074564 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:55.575272 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:56.074862 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:56.575480 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:57.074596 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:57.574628 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:58.074637 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:58.575096 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:59.074579 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:18:59.574826 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:00.074586 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:00.574720 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:01.074884 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:01.575499 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:02.075230 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:02.574947 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:03.074896 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:03.574633 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:04.075610 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:04.575330 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:05.074962 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:05.575345 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:06.075503 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:06.575113 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:07.075052 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:07.575035 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:08.074628 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:08.574917 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
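The run above is a poll loop: pgrep for the apiserver process roughly every 500ms until it appears or the wait gives up (the apiserver never shows up here, so minikube falls through to log gathering below). A minimal sketch, with the timeout value an assumption:

package main

import (
	"errors"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep every 500ms, matching the cadence in the log.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("kube-apiserver process never appeared")
}

func main() { _ = waitForAPIServer(time.Minute) }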
	I1218 01:19:09.074737 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:09.074846 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:09.102226 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:09.102256 1458839 cri.go:89] found id: ""
	I1218 01:19:09.102265 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:09.102339 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:09.106199 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:09.106275 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:09.134804 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:09.134830 1458839 cri.go:89] found id: ""
	I1218 01:19:09.134844 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:09.134902 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:09.138921 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:09.138998 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:09.170953 1458839 cri.go:89] found id: ""
	I1218 01:19:09.170976 1458839 logs.go:282] 0 containers: []
	W1218 01:19:09.170985 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:09.170991 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:09.171050 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:09.201552 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:09.201571 1458839 cri.go:89] found id: ""
	I1218 01:19:09.201579 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:09.201649 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:09.205556 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:09.205626 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:09.237763 1458839 cri.go:89] found id: ""
	I1218 01:19:09.237786 1458839 logs.go:282] 0 containers: []
	W1218 01:19:09.237796 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:09.237802 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:09.237866 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:09.264088 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:09.264107 1458839 cri.go:89] found id: ""
	I1218 01:19:09.264115 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:09.264171 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:09.267843 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:09.267917 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:09.292404 1458839 cri.go:89] found id: ""
	I1218 01:19:09.292427 1458839 logs.go:282] 0 containers: []
	W1218 01:19:09.292435 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:09.292442 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:09.292501 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:09.317880 1458839 cri.go:89] found id: ""
	I1218 01:19:09.317958 1458839 logs.go:282] 0 containers: []
	W1218 01:19:09.317981 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:09.318005 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:09.318043 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:09.379596 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:09.379633 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:09.396355 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:09.396387 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:09.471344 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:09.471365 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:09.471377 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:09.508780 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:09.508814 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:09.541482 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:09.541514 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:09.581229 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:09.581263 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:09.612646 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:09.612678 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:09.643394 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:09.643432 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
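The gathering pass above resolves each control-plane component to a container id with `crictl ps --name`, then tails the last 400 lines of its logs. A self-contained sketch of that loop:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		for _, id := range strings.Fields(string(out)) {
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", name, id, logs)
		}
	}
}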
	I1218 01:19:12.172755 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:12.183665 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:12.183733 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:12.216291 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:12.216309 1458839 cri.go:89] found id: ""
	I1218 01:19:12.216318 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:12.216374 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:12.220151 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:12.220272 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:12.245589 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:12.245656 1458839 cri.go:89] found id: ""
	I1218 01:19:12.245671 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:12.245735 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:12.249350 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:12.249423 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:12.273929 1458839 cri.go:89] found id: ""
	I1218 01:19:12.273994 1458839 logs.go:282] 0 containers: []
	W1218 01:19:12.274018 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:12.274036 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:12.274150 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:12.299231 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:12.299256 1458839 cri.go:89] found id: ""
	I1218 01:19:12.299264 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:12.299322 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:12.302963 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:12.303044 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:12.330932 1458839 cri.go:89] found id: ""
	I1218 01:19:12.330957 1458839 logs.go:282] 0 containers: []
	W1218 01:19:12.330965 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:12.330972 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:12.331039 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:12.361640 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:12.361708 1458839 cri.go:89] found id: ""
	I1218 01:19:12.361731 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:12.361813 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:12.365550 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:12.365653 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:12.390241 1458839 cri.go:89] found id: ""
	I1218 01:19:12.390308 1458839 logs.go:282] 0 containers: []
	W1218 01:19:12.390334 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:12.390346 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:12.390409 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:12.419078 1458839 cri.go:89] found id: ""
	I1218 01:19:12.419102 1458839 logs.go:282] 0 containers: []
	W1218 01:19:12.419110 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:12.419125 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:12.419137 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:12.446599 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:12.446627 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:12.504317 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:12.504357 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:12.519275 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:12.519348 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:12.553989 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:12.554025 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:12.597982 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:12.598019 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:12.629375 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:12.629410 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:12.697069 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:12.697090 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:12.697105 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:12.734026 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:12.734058 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
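The recurring "describe nodes" failure is the key signal in these passes: a kube-apiserver container exists, but nothing answers on localhost:8443, so every kubectl call through /var/lib/minikube/kubeconfig is refused. A one-line check that reproduces the same condition by hand (a sketch; the port is the one shown in the log, and /healthz is the apiserver's standard health endpoint, not something the log itself queries):

	# a healthy apiserver returns "ok"; "connection refused" matches the failure above
	curl -sk https://localhost:8443/healthz
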
	I1218 01:19:15.264789 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:15.275265 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:15.275402 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:15.301324 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:15.301364 1458839 cri.go:89] found id: ""
	I1218 01:19:15.301373 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:15.301441 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:15.305427 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:15.305548 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:15.332065 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:15.332089 1458839 cri.go:89] found id: ""
	I1218 01:19:15.332098 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:15.332193 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:15.336044 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:15.336130 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:15.363119 1458839 cri.go:89] found id: ""
	I1218 01:19:15.363157 1458839 logs.go:282] 0 containers: []
	W1218 01:19:15.363167 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:15.363174 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:15.363242 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:15.388821 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:15.388854 1458839 cri.go:89] found id: ""
	I1218 01:19:15.388862 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:15.388930 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:15.392698 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:15.392813 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:15.422607 1458839 cri.go:89] found id: ""
	I1218 01:19:15.422633 1458839 logs.go:282] 0 containers: []
	W1218 01:19:15.422642 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:15.422648 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:15.422750 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:15.448920 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:15.448945 1458839 cri.go:89] found id: ""
	I1218 01:19:15.448955 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:15.449012 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:15.452749 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:15.452823 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:15.478218 1458839 cri.go:89] found id: ""
	I1218 01:19:15.478248 1458839 logs.go:282] 0 containers: []
	W1218 01:19:15.478257 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:15.478264 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:15.478328 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:15.505245 1458839 cri.go:89] found id: ""
	I1218 01:19:15.505273 1458839 logs.go:282] 0 containers: []
	W1218 01:19:15.505282 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:15.505298 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:15.505310 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:15.520361 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:15.520392 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:15.586729 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:15.586804 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:15.586826 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:15.623052 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:15.623084 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:15.655925 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:15.655959 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:15.689434 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:15.689466 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:15.723177 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:15.723210 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:15.753550 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:15.753584 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:15.781193 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:15.781222 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:18.339675 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:18.349717 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:18.349796 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:18.374393 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:18.374418 1458839 cri.go:89] found id: ""
	I1218 01:19:18.374427 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:18.374483 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:18.378211 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:18.378297 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:18.404910 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:18.404978 1458839 cri.go:89] found id: ""
	I1218 01:19:18.405001 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:18.405073 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:18.408874 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:18.409017 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:18.433506 1458839 cri.go:89] found id: ""
	I1218 01:19:18.433532 1458839 logs.go:282] 0 containers: []
	W1218 01:19:18.433541 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:18.433547 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:18.433622 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:18.458247 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:18.458275 1458839 cri.go:89] found id: ""
	I1218 01:19:18.458284 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:18.458346 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:18.462108 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:18.462233 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:18.486456 1458839 cri.go:89] found id: ""
	I1218 01:19:18.486521 1458839 logs.go:282] 0 containers: []
	W1218 01:19:18.486536 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:18.486543 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:18.486605 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:18.515014 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:18.515037 1458839 cri.go:89] found id: ""
	I1218 01:19:18.515046 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:18.515102 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:18.518738 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:18.518813 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:18.543744 1458839 cri.go:89] found id: ""
	I1218 01:19:18.543767 1458839 logs.go:282] 0 containers: []
	W1218 01:19:18.543776 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:18.543782 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:18.543840 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:18.568401 1458839 cri.go:89] found id: ""
	I1218 01:19:18.568429 1458839 logs.go:282] 0 containers: []
	W1218 01:19:18.568454 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:18.568468 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:18.568481 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:18.604087 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:18.604123 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:18.631236 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:18.631264 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:18.659863 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:18.659891 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:18.735040 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:18.735060 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:18.735073 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:18.772182 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:18.772215 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:18.801804 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:18.801837 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:18.860598 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:18.860641 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:18.875848 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:18.875878 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:21.423653 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:21.433747 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:21.433820 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:21.459358 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:21.459381 1458839 cri.go:89] found id: ""
	I1218 01:19:21.459390 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:21.459444 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:21.463203 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:21.463280 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:21.491474 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:21.491497 1458839 cri.go:89] found id: ""
	I1218 01:19:21.491505 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:21.491560 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:21.495354 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:21.495433 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:21.520294 1458839 cri.go:89] found id: ""
	I1218 01:19:21.520319 1458839 logs.go:282] 0 containers: []
	W1218 01:19:21.520332 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:21.520340 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:21.520404 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:21.547314 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:21.547339 1458839 cri.go:89] found id: ""
	I1218 01:19:21.547347 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:21.547403 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:21.551115 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:21.551188 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:21.586807 1458839 cri.go:89] found id: ""
	I1218 01:19:21.586868 1458839 logs.go:282] 0 containers: []
	W1218 01:19:21.586892 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:21.586909 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:21.586983 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:21.612925 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:21.612946 1458839 cri.go:89] found id: ""
	I1218 01:19:21.612954 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:21.613011 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:21.616606 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:21.616720 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:21.647385 1458839 cri.go:89] found id: ""
	I1218 01:19:21.647467 1458839 logs.go:282] 0 containers: []
	W1218 01:19:21.647490 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:21.647510 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:21.647608 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:21.700010 1458839 cri.go:89] found id: ""
	I1218 01:19:21.700084 1458839 logs.go:282] 0 containers: []
	W1218 01:19:21.700107 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:21.700132 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:21.700171 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:21.735573 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:21.735661 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:21.772561 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:21.772662 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:21.834328 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:21.834364 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:21.908364 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:21.908386 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:21.908399 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:21.966009 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:21.966043 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:22.008663 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:22.008697 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:22.039159 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:22.039190 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:22.076565 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:22.076596 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:24.593073 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:24.603372 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:24.603445 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:24.632570 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:24.632593 1458839 cri.go:89] found id: ""
	I1218 01:19:24.632602 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:24.632682 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:24.636376 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:24.636449 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:24.662925 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:24.662948 1458839 cri.go:89] found id: ""
	I1218 01:19:24.662957 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:24.663013 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:24.666624 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:24.666744 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:24.701405 1458839 cri.go:89] found id: ""
	I1218 01:19:24.701470 1458839 logs.go:282] 0 containers: []
	W1218 01:19:24.701485 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:24.701492 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:24.701559 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:24.728794 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:24.728819 1458839 cri.go:89] found id: ""
	I1218 01:19:24.728828 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:24.728885 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:24.732712 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:24.732791 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:24.757575 1458839 cri.go:89] found id: ""
	I1218 01:19:24.757600 1458839 logs.go:282] 0 containers: []
	W1218 01:19:24.757618 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:24.757624 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:24.757694 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:24.781875 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:24.781898 1458839 cri.go:89] found id: ""
	I1218 01:19:24.781906 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:24.781980 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:24.785603 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:24.785677 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:24.814387 1458839 cri.go:89] found id: ""
	I1218 01:19:24.814458 1458839 logs.go:282] 0 containers: []
	W1218 01:19:24.814472 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:24.814480 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:24.814542 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:24.840188 1458839 cri.go:89] found id: ""
	I1218 01:19:24.840224 1458839 logs.go:282] 0 containers: []
	W1218 01:19:24.840233 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:24.840252 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:24.840265 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:24.897685 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:24.897717 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:24.937486 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:24.937518 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:24.967899 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:24.967933 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:24.998398 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:24.998437 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:25.034593 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:25.034626 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:25.050695 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:25.050724 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:25.120564 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:25.120589 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:25.120606 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:25.152259 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:25.152293 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:27.688219 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:27.698310 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:27.698382 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:27.723127 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:27.723151 1458839 cri.go:89] found id: ""
	I1218 01:19:27.723159 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:27.723217 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:27.726900 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:27.726977 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:27.752935 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:27.752957 1458839 cri.go:89] found id: ""
	I1218 01:19:27.752965 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:27.753028 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:27.756724 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:27.756797 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:27.781033 1458839 cri.go:89] found id: ""
	I1218 01:19:27.781056 1458839 logs.go:282] 0 containers: []
	W1218 01:19:27.781065 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:27.781071 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:27.781139 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:27.806931 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:27.806956 1458839 cri.go:89] found id: ""
	I1218 01:19:27.806976 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:27.807055 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:27.810986 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:27.811064 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:27.837208 1458839 cri.go:89] found id: ""
	I1218 01:19:27.837232 1458839 logs.go:282] 0 containers: []
	W1218 01:19:27.837252 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:27.837259 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:27.837322 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:27.862877 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:27.862900 1458839 cri.go:89] found id: ""
	I1218 01:19:27.862908 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:27.862966 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:27.866813 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:27.866890 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:27.898120 1458839 cri.go:89] found id: ""
	I1218 01:19:27.898147 1458839 logs.go:282] 0 containers: []
	W1218 01:19:27.898156 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:27.898163 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:27.898222 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:27.932186 1458839 cri.go:89] found id: ""
	I1218 01:19:27.932215 1458839 logs.go:282] 0 containers: []
	W1218 01:19:27.932224 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:27.932239 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:27.932250 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:27.970543 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:27.970583 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:28.015340 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:28.015372 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:28.031170 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:28.031201 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:28.066894 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:28.066929 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:28.108677 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:28.108713 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:28.136881 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:28.136909 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:28.194687 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:28.194724 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:28.259479 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:28.259539 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:28.259566 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:30.791808 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:30.802292 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:30.802369 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:30.828736 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:30.828760 1458839 cri.go:89] found id: ""
	I1218 01:19:30.828768 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:30.828828 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:30.832693 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:30.832767 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:30.858665 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:30.858685 1458839 cri.go:89] found id: ""
	I1218 01:19:30.858693 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:30.858758 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:30.862720 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:30.862796 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:30.888364 1458839 cri.go:89] found id: ""
	I1218 01:19:30.888397 1458839 logs.go:282] 0 containers: []
	W1218 01:19:30.888406 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:30.888413 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:30.888482 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:30.922093 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:30.922167 1458839 cri.go:89] found id: ""
	I1218 01:19:30.922190 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:30.922280 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:30.930000 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:30.930081 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:30.967903 1458839 cri.go:89] found id: ""
	I1218 01:19:30.967927 1458839 logs.go:282] 0 containers: []
	W1218 01:19:30.967936 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:30.967945 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:30.968005 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:31.012210 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:31.012244 1458839 cri.go:89] found id: ""
	I1218 01:19:31.012254 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:31.012322 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:31.018486 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:31.018578 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:31.055053 1458839 cri.go:89] found id: ""
	I1218 01:19:31.055089 1458839 logs.go:282] 0 containers: []
	W1218 01:19:31.055099 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:31.055105 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:31.055176 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:31.100278 1458839 cri.go:89] found id: ""
	I1218 01:19:31.100312 1458839 logs.go:282] 0 containers: []
	W1218 01:19:31.100322 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:31.100336 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:31.100348 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:31.138526 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:31.138561 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:31.172596 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:31.172661 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:31.215629 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:31.215659 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:31.232607 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:31.232648 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:31.315841 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:31.315871 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:31.315888 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:31.368506 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:31.368543 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:31.402352 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:31.402389 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:31.466163 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:31.466242 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:34.009774 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:34.020914 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:34.020993 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:34.047320 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:34.047354 1458839 cri.go:89] found id: ""
	I1218 01:19:34.047363 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:34.047425 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:34.051271 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:34.051345 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:34.079678 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:34.079703 1458839 cri.go:89] found id: ""
	I1218 01:19:34.079711 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:34.079801 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:34.083853 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:34.083956 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:34.108801 1458839 cri.go:89] found id: ""
	I1218 01:19:34.108831 1458839 logs.go:282] 0 containers: []
	W1218 01:19:34.108840 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:34.108847 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:34.108912 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:34.134834 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:34.134867 1458839 cri.go:89] found id: ""
	I1218 01:19:34.134877 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:34.134946 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:34.138895 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:34.138999 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:34.164832 1458839 cri.go:89] found id: ""
	I1218 01:19:34.164859 1458839 logs.go:282] 0 containers: []
	W1218 01:19:34.164881 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:34.164888 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:34.164961 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:34.195216 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:34.195286 1458839 cri.go:89] found id: ""
	I1218 01:19:34.195308 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:34.195397 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:34.199317 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:34.199397 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:34.224111 1458839 cri.go:89] found id: ""
	I1218 01:19:34.224140 1458839 logs.go:282] 0 containers: []
	W1218 01:19:34.224149 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:34.224155 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:34.224220 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:34.250127 1458839 cri.go:89] found id: ""
	I1218 01:19:34.250150 1458839 logs.go:282] 0 containers: []
	W1218 01:19:34.250158 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:34.250175 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:34.250188 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:34.318860 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:34.318884 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:34.318898 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:34.347281 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:34.347318 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:34.404846 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:34.404882 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:34.420095 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:34.420124 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:34.459106 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:34.459146 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:34.496297 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:34.496328 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:34.533844 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:34.533877 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:34.564760 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:34.564787 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
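The timestamps show this sweep repeating on a roughly three-second interval (01:19:12, :15, :18, :21, :24, :27, :30, :34, ...), and every iteration finds the same four containers — kube-apiserver, etcd, kube-scheduler, kube-controller-manager — and never coredns, kube-proxy, kindnet, or storage-provisioner, consistent with an apiserver that never becomes reachable. To watch the same state by hand while waiting, a sketch using only tools already shown in the log:

	# refresh the container listing every 3 seconds, mirroring the polling above
	while true; do sudo crictl ps -a; sleep 3; done
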
	I1218 01:19:37.098886 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:37.109587 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:37.109672 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:37.149788 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:37.149814 1458839 cri.go:89] found id: ""
	I1218 01:19:37.149822 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:37.149923 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:37.156778 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:37.156905 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:37.188427 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:37.188526 1458839 cri.go:89] found id: ""
	I1218 01:19:37.188551 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:37.188678 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:37.193400 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:37.193567 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:37.242988 1458839 cri.go:89] found id: ""
	I1218 01:19:37.243066 1458839 logs.go:282] 0 containers: []
	W1218 01:19:37.243091 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:37.243110 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:37.243200 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:37.295149 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:37.295225 1458839 cri.go:89] found id: ""
	I1218 01:19:37.295248 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:37.295332 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:37.299850 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:37.299971 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:37.351101 1458839 cri.go:89] found id: ""
	I1218 01:19:37.351189 1458839 logs.go:282] 0 containers: []
	W1218 01:19:37.351213 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:37.351235 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:37.351348 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:37.389822 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:37.389903 1458839 cri.go:89] found id: ""
	I1218 01:19:37.389934 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:37.390032 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:37.394670 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:37.394792 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:37.431435 1458839 cri.go:89] found id: ""
	I1218 01:19:37.431513 1458839 logs.go:282] 0 containers: []
	W1218 01:19:37.431536 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:37.431554 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:37.431653 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:37.467278 1458839 cri.go:89] found id: ""
	I1218 01:19:37.467353 1458839 logs.go:282] 0 containers: []
	W1218 01:19:37.467404 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:37.467432 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:37.467471 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:37.515848 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:37.515938 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:37.565151 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:37.565245 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:37.621983 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:37.622362 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:37.681234 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:37.681334 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:37.714770 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:37.714844 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:37.832430 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:37.832450 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:37.832462 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:37.872379 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:37.872468 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:37.912695 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:37.912774 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:40.478894 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:40.489246 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:40.489320 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:40.515928 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:40.515948 1458839 cri.go:89] found id: ""
	I1218 01:19:40.515956 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:40.516014 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:40.519556 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:40.519633 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:40.549024 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:40.549047 1458839 cri.go:89] found id: ""
	I1218 01:19:40.549055 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:40.549113 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:40.552948 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:40.553025 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:40.581839 1458839 cri.go:89] found id: ""
	I1218 01:19:40.581876 1458839 logs.go:282] 0 containers: []
	W1218 01:19:40.581886 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:40.581892 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:40.581957 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:40.609432 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:40.609450 1458839 cri.go:89] found id: ""
	I1218 01:19:40.609459 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:40.609517 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:40.613092 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:40.613169 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:40.639256 1458839 cri.go:89] found id: ""
	I1218 01:19:40.639279 1458839 logs.go:282] 0 containers: []
	W1218 01:19:40.639288 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:40.639294 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:40.639353 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:40.670424 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:40.670445 1458839 cri.go:89] found id: ""
	I1218 01:19:40.670453 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:40.670510 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:40.674593 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:40.674668 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:40.713937 1458839 cri.go:89] found id: ""
	I1218 01:19:40.713960 1458839 logs.go:282] 0 containers: []
	W1218 01:19:40.713969 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:40.713976 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:40.714081 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:40.742187 1458839 cri.go:89] found id: ""
	I1218 01:19:40.742266 1458839 logs.go:282] 0 containers: []
	W1218 01:19:40.742280 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:40.742296 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:40.742308 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:40.777318 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:40.777353 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:40.808350 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:40.808379 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:40.870821 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:40.870858 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:40.887031 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:40.887075 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:40.956173 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:40.956195 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:40.956209 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:41.000989 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:41.001050 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:41.033886 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:41.033916 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:41.063507 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:41.063544 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:43.598554 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:43.609349 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:43.609429 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:43.635750 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:43.635773 1458839 cri.go:89] found id: ""
	I1218 01:19:43.635781 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:43.635839 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:43.639524 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:43.639598 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:43.684866 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:43.684946 1458839 cri.go:89] found id: ""
	I1218 01:19:43.684981 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:43.685077 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:43.689602 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:43.689723 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:43.718061 1458839 cri.go:89] found id: ""
	I1218 01:19:43.718085 1458839 logs.go:282] 0 containers: []
	W1218 01:19:43.718093 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:43.718100 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:43.718167 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:43.750022 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:43.750067 1458839 cri.go:89] found id: ""
	I1218 01:19:43.750075 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:43.750159 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:43.753727 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:43.753807 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:43.777837 1458839 cri.go:89] found id: ""
	I1218 01:19:43.777861 1458839 logs.go:282] 0 containers: []
	W1218 01:19:43.777870 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:43.777876 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:43.777936 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:43.808114 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:43.808137 1458839 cri.go:89] found id: ""
	I1218 01:19:43.808145 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:43.808201 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:43.811729 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:43.811802 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:43.838004 1458839 cri.go:89] found id: ""
	I1218 01:19:43.838034 1458839 logs.go:282] 0 containers: []
	W1218 01:19:43.838050 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:43.838057 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:43.838143 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:43.863153 1458839 cri.go:89] found id: ""
	I1218 01:19:43.863176 1458839 logs.go:282] 0 containers: []
	W1218 01:19:43.863185 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:43.863231 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:43.863250 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:43.890302 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:43.890334 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:43.921206 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:43.921241 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:43.983845 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:43.983881 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:44.054338 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:44.054407 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:44.054428 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:44.103817 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:44.103847 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:44.136651 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:44.136684 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:44.164074 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:44.164105 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:44.178466 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:44.178493 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:46.714857 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:46.726422 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:46.726499 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:46.751480 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:46.751505 1458839 cri.go:89] found id: ""
	I1218 01:19:46.751514 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:46.751569 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:46.755180 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:46.755259 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:46.787823 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:46.787849 1458839 cri.go:89] found id: ""
	I1218 01:19:46.787868 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:46.787932 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:46.791638 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:46.791713 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:46.818050 1458839 cri.go:89] found id: ""
	I1218 01:19:46.818074 1458839 logs.go:282] 0 containers: []
	W1218 01:19:46.818082 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:46.818088 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:46.818148 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:46.843556 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:46.843618 1458839 cri.go:89] found id: ""
	I1218 01:19:46.843657 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:46.843764 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:46.847646 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:46.847762 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:46.876894 1458839 cri.go:89] found id: ""
	I1218 01:19:46.876919 1458839 logs.go:282] 0 containers: []
	W1218 01:19:46.876929 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:46.876936 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:46.877028 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:46.902259 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:46.902282 1458839 cri.go:89] found id: ""
	I1218 01:19:46.902290 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:46.902367 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:46.906289 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:46.906367 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:46.930770 1458839 cri.go:89] found id: ""
	I1218 01:19:46.930797 1458839 logs.go:282] 0 containers: []
	W1218 01:19:46.930807 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:46.930813 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:46.930875 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:46.956037 1458839 cri.go:89] found id: ""
	I1218 01:19:46.956117 1458839 logs.go:282] 0 containers: []
	W1218 01:19:46.956141 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:46.956180 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:46.956211 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:46.991240 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:46.991270 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:47.028704 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:47.028737 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:47.043659 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:47.043688 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:47.078209 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:47.078248 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:47.107350 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:47.107379 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:47.136480 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:47.136514 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:47.169262 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:47.169305 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:47.227364 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:47.227400 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:47.292910 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
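
The same pass repeats roughly every three seconds with identical results: the static-pod containers are present, the addon containers are absent, and the API server keeps refusing connections on the port its kubeconfig points at. A quick hedged check from inside the node, assuming curl is available in the image, is to hit the apiserver health endpoint directly and then read that container's own log for the reason it is not serving (the crictl invocation and container ID are taken verbatim from the log above):

	curl -sk https://localhost:8443/livez; echo      # connection refused here, matching the kubectl error
	sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322
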
	I1218 01:19:49.794495 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:49.805243 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:49.805314 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:49.831375 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:49.831396 1458839 cri.go:89] found id: ""
	I1218 01:19:49.831405 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:49.831462 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:49.835001 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:49.835074 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:49.861062 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:49.861084 1458839 cri.go:89] found id: ""
	I1218 01:19:49.861092 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:49.861147 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:49.864679 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:49.864757 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:49.894305 1458839 cri.go:89] found id: ""
	I1218 01:19:49.894372 1458839 logs.go:282] 0 containers: []
	W1218 01:19:49.894391 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:49.894399 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:49.894463 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:49.923914 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:49.923937 1458839 cri.go:89] found id: ""
	I1218 01:19:49.923945 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:49.924001 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:49.927832 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:49.927906 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:49.962918 1458839 cri.go:89] found id: ""
	I1218 01:19:49.962984 1458839 logs.go:282] 0 containers: []
	W1218 01:19:49.963000 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:49.963007 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:49.963071 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:49.988249 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:49.988271 1458839 cri.go:89] found id: ""
	I1218 01:19:49.988280 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:49.988336 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:49.991976 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:49.992053 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:50.023988 1458839 cri.go:89] found id: ""
	I1218 01:19:50.024015 1458839 logs.go:282] 0 containers: []
	W1218 01:19:50.024026 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:50.024032 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:50.024097 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:50.055879 1458839 cri.go:89] found id: ""
	I1218 01:19:50.055905 1458839 logs.go:282] 0 containers: []
	W1218 01:19:50.055915 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:50.055932 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:50.055944 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:50.114097 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:50.114136 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:50.130307 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:50.130389 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:50.195590 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:50.195616 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:50.195629 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:50.229202 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:50.229236 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:50.260205 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:50.260239 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:50.287991 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:50.288018 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:50.317322 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:50.317354 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:50.354133 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:50.354160 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:52.889757 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:52.900069 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:52.900178 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:52.925565 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:52.925589 1458839 cri.go:89] found id: ""
	I1218 01:19:52.925597 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:52.925658 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:52.929508 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:52.929592 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:52.955651 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:52.955673 1458839 cri.go:89] found id: ""
	I1218 01:19:52.955681 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:52.955739 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:52.959710 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:52.959782 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:52.984758 1458839 cri.go:89] found id: ""
	I1218 01:19:52.984780 1458839 logs.go:282] 0 containers: []
	W1218 01:19:52.984789 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:52.984795 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:52.984854 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:53.011538 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:53.011560 1458839 cri.go:89] found id: ""
	I1218 01:19:53.011570 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:53.011634 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:53.015766 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:53.015841 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:53.041820 1458839 cri.go:89] found id: ""
	I1218 01:19:53.041844 1458839 logs.go:282] 0 containers: []
	W1218 01:19:53.041853 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:53.041859 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:53.041922 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:53.068488 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:53.068509 1458839 cri.go:89] found id: ""
	I1218 01:19:53.068518 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:53.068580 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:53.072564 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:53.072668 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:53.099015 1458839 cri.go:89] found id: ""
	I1218 01:19:53.099043 1458839 logs.go:282] 0 containers: []
	W1218 01:19:53.099052 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:53.099058 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:53.099119 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:53.125701 1458839 cri.go:89] found id: ""
	I1218 01:19:53.125778 1458839 logs.go:282] 0 containers: []
	W1218 01:19:53.125793 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:53.125809 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:53.125820 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:53.189902 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:53.189945 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:53.205988 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:53.206017 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:53.241359 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:53.241394 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:53.273133 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:53.273165 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:53.303088 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:53.303117 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:53.371848 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:53.371913 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:53.371940 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:53.438264 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:53.438326 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:53.483885 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:53.483923 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:56.013999 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:56.024995 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:56.025071 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:56.052051 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:56.052081 1458839 cri.go:89] found id: ""
	I1218 01:19:56.052090 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:56.052181 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:56.056070 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:56.056200 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:56.086602 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:56.086624 1458839 cri.go:89] found id: ""
	I1218 01:19:56.086634 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:56.086712 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:56.090634 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:56.090712 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:56.116294 1458839 cri.go:89] found id: ""
	I1218 01:19:56.116320 1458839 logs.go:282] 0 containers: []
	W1218 01:19:56.116329 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:56.116335 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:56.116395 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:56.142358 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:56.142379 1458839 cri.go:89] found id: ""
	I1218 01:19:56.142387 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:56.142442 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:56.146324 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:56.146449 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:56.170381 1458839 cri.go:89] found id: ""
	I1218 01:19:56.170406 1458839 logs.go:282] 0 containers: []
	W1218 01:19:56.170415 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:56.170422 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:56.170501 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:56.199408 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:56.199431 1458839 cri.go:89] found id: ""
	I1218 01:19:56.199439 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:56.199518 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:56.203252 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:56.203342 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:56.227366 1458839 cri.go:89] found id: ""
	I1218 01:19:56.227392 1458839 logs.go:282] 0 containers: []
	W1218 01:19:56.227401 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:56.227407 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:56.227481 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:56.252792 1458839 cri.go:89] found id: ""
	I1218 01:19:56.252871 1458839 logs.go:282] 0 containers: []
	W1218 01:19:56.252894 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:56.252938 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:56.252966 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:56.310601 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:56.310639 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:56.325728 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:56.325765 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:56.406614 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:56.406644 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:56.406656 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:56.472119 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:56.472155 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:56.509684 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:56.509719 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:56.547965 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:56.547999 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:56.577776 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:56.577809 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:56.606045 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:56.606076 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
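
Note the fallback built into the "container status" step above: the backtick substitution resolves crictl with "which" (keeping the bare name if it is not on PATH), and if that listing fails it falls back to docker ps. Spelled out as plain shell, equivalent to the one-liner in the log:

	CRICTL="$(which crictl || echo crictl)"      # prefer an absolute crictl path when available
	sudo "$CRICTL" ps -a || sudo docker ps -a    # fall back to docker if crictl is unusable
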
	I1218 01:19:59.136752 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:19:59.148201 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:19:59.148325 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:19:59.189865 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:59.189929 1458839 cri.go:89] found id: ""
	I1218 01:19:59.189953 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:19:59.190038 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:59.195203 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:19:59.195358 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:19:59.234732 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:59.234810 1458839 cri.go:89] found id: ""
	I1218 01:19:59.234833 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:19:59.234909 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:59.239835 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:19:59.239955 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:19:59.274240 1458839 cri.go:89] found id: ""
	I1218 01:19:59.274325 1458839 logs.go:282] 0 containers: []
	W1218 01:19:59.274356 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:19:59.274376 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:19:59.274469 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:19:59.307104 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:59.307141 1458839 cri.go:89] found id: ""
	I1218 01:19:59.307150 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:19:59.307234 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:59.314072 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:19:59.314200 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:19:59.356413 1458839 cri.go:89] found id: ""
	I1218 01:19:59.356441 1458839 logs.go:282] 0 containers: []
	W1218 01:19:59.356450 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:19:59.356456 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:19:59.356540 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:19:59.386349 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:19:59.386375 1458839 cri.go:89] found id: ""
	I1218 01:19:59.386384 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:19:59.386465 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:19:59.401167 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:19:59.401289 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:19:59.461485 1458839 cri.go:89] found id: ""
	I1218 01:19:59.461524 1458839 logs.go:282] 0 containers: []
	W1218 01:19:59.461532 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:19:59.461538 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:19:59.461644 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:19:59.507874 1458839 cri.go:89] found id: ""
	I1218 01:19:59.507909 1458839 logs.go:282] 0 containers: []
	W1218 01:19:59.507919 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:19:59.507950 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:19:59.507974 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:19:59.584026 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:19:59.584059 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:19:59.601582 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:19:59.601639 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:19:59.696472 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:19:59.696505 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:19:59.696519 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:19:59.745796 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:19:59.745832 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:19:59.800860 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:19:59.800895 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:19:59.838166 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:19:59.838202 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:19:59.871130 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:19:59.871165 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:19:59.944091 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:19:59.944121 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
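	[annotation] The block above is one full pass of minikube's control-plane wait loop: for each expected component it lists matching containers with crictl, then, since the apiserver is still not answering, gathers kubelet/dmesg/containerd/per-container logs before the next probe. A minimal Go sketch of the container-probe step, using only the crictl flags visible in the log (illustrative, not minikube's actual code):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // findContainers mirrors the probe the log runs per component:
	    //   sudo crictl ps -a --quiet --name=<component>
	    // --quiet prints only container IDs, one per line; an empty result is
	    // the "No container was found matching" case seen above.
	    func findContainers(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	        }
	        for _, c := range components {
	            ids, err := findContainers(c)
	            if err != nil {
	                fmt.Printf("%s: probe failed: %v\n", c, err)
	                continue
	            }
	            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	        }
	    }

	[annotation] In every pass, apiserver/etcd/scheduler/controller-manager containers are found, while kube-proxy, coredns, kindnet, and storage-provisioner never appear, consistent with a control plane that started its static pods but never progressed far enough to schedule add-ons.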
	I1218 01:20:02.479761 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:02.490859 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:02.490931 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:02.519748 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:02.519773 1458839 cri.go:89] found id: ""
	I1218 01:20:02.519783 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:02.519845 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:02.524009 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:02.524102 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:02.552817 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:02.552839 1458839 cri.go:89] found id: ""
	I1218 01:20:02.552848 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:02.552917 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:02.556748 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:02.556826 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:02.586254 1458839 cri.go:89] found id: ""
	I1218 01:20:02.586278 1458839 logs.go:282] 0 containers: []
	W1218 01:20:02.586287 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:02.586293 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:02.586352 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:02.613403 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:02.613427 1458839 cri.go:89] found id: ""
	I1218 01:20:02.613435 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:02.613498 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:02.617318 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:02.617392 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:02.643699 1458839 cri.go:89] found id: ""
	I1218 01:20:02.643727 1458839 logs.go:282] 0 containers: []
	W1218 01:20:02.643737 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:02.643744 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:02.643806 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:02.677778 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:02.677801 1458839 cri.go:89] found id: ""
	I1218 01:20:02.677810 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:02.677871 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:02.683386 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:02.683465 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:02.723606 1458839 cri.go:89] found id: ""
	I1218 01:20:02.723633 1458839 logs.go:282] 0 containers: []
	W1218 01:20:02.723642 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:02.723648 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:02.723715 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:02.751262 1458839 cri.go:89] found id: ""
	I1218 01:20:02.751289 1458839 logs.go:282] 0 containers: []
	W1218 01:20:02.751298 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:02.751312 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:02.751323 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:02.779103 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:02.779130 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:02.809953 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:02.809991 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:02.850529 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:02.850559 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:02.911705 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:02.911799 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:03.002828 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:03.002903 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:03.002938 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:03.057521 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:03.057603 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:03.077698 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:03.077723 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:03.141580 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:03.141657 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
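	[annotation] Each cycle's "describe nodes" step fails identically: kubectl exits 1 because nothing accepts connections on the kubeconfig's server address, localhost:8443. A connection refusal (as opposed to a timeout or a TLS/authorization error) means no process is bound to that port at all. A minimal Go sketch that makes the same distinction with a plain TCP dial, assuming the localhost:8443 address quoted in the error (illustrative only):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // The address comes from the kubeconfig error quoted in the log.
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            // "connection refused": nothing is bound to the port at all.
	            fmt.Println("apiserver endpoint closed:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("port open; the apiserver is listening but may still be unhealthy")
	    }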
	I1218 01:20:05.692789 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:05.704882 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:05.704966 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:05.732038 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:05.732059 1458839 cri.go:89] found id: ""
	I1218 01:20:05.732067 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:05.732126 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:05.735963 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:05.736037 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:05.764569 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:05.764593 1458839 cri.go:89] found id: ""
	I1218 01:20:05.764602 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:05.764688 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:05.768552 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:05.768655 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:05.798898 1458839 cri.go:89] found id: ""
	I1218 01:20:05.798924 1458839 logs.go:282] 0 containers: []
	W1218 01:20:05.798933 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:05.798940 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:05.799047 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:05.824252 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:05.824274 1458839 cri.go:89] found id: ""
	I1218 01:20:05.824328 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:05.824474 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:05.828751 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:05.828826 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:05.854958 1458839 cri.go:89] found id: ""
	I1218 01:20:05.854986 1458839 logs.go:282] 0 containers: []
	W1218 01:20:05.854996 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:05.855003 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:05.855067 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:05.883279 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:05.883303 1458839 cri.go:89] found id: ""
	I1218 01:20:05.883311 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:05.883367 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:05.887154 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:05.887232 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:05.912750 1458839 cri.go:89] found id: ""
	I1218 01:20:05.912772 1458839 logs.go:282] 0 containers: []
	W1218 01:20:05.912780 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:05.912786 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:05.912849 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:05.938942 1458839 cri.go:89] found id: ""
	I1218 01:20:05.938964 1458839 logs.go:282] 0 containers: []
	W1218 01:20:05.938973 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:05.938988 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:05.939000 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:05.973183 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:05.973212 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:06.008587 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:06.008646 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:06.040571 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:06.040604 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:06.071405 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:06.071448 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:06.112348 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:06.112381 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:06.172251 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:06.172287 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:06.187506 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:06.187534 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:06.257169 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:06.257188 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:06.257202 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:08.793950 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:08.804765 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:08.804841 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:08.830457 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:08.830486 1458839 cri.go:89] found id: ""
	I1218 01:20:08.830495 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:08.830554 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:08.834453 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:08.834529 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:08.863398 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:08.863421 1458839 cri.go:89] found id: ""
	I1218 01:20:08.863429 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:08.863489 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:08.867172 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:08.867245 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:08.892925 1458839 cri.go:89] found id: ""
	I1218 01:20:08.892948 1458839 logs.go:282] 0 containers: []
	W1218 01:20:08.892957 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:08.892963 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:08.893023 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:08.919575 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:08.919595 1458839 cri.go:89] found id: ""
	I1218 01:20:08.919603 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:08.919659 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:08.923621 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:08.923744 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:08.953287 1458839 cri.go:89] found id: ""
	I1218 01:20:08.953312 1458839 logs.go:282] 0 containers: []
	W1218 01:20:08.953321 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:08.953328 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:08.953386 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:08.978971 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:08.978998 1458839 cri.go:89] found id: ""
	I1218 01:20:08.979006 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:08.979062 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:08.982795 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:08.982912 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:09.010802 1458839 cri.go:89] found id: ""
	I1218 01:20:09.010869 1458839 logs.go:282] 0 containers: []
	W1218 01:20:09.010885 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:09.010894 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:09.010960 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:09.040652 1458839 cri.go:89] found id: ""
	I1218 01:20:09.040678 1458839 logs.go:282] 0 containers: []
	W1218 01:20:09.040687 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:09.040701 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:09.040713 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:09.107849 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:09.107867 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:09.107883 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:09.165528 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:09.165563 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:09.201441 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:09.201474 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:09.233795 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:09.233828 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:09.275221 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:09.275254 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:09.303482 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:09.303509 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:09.333048 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:09.333083 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:09.362348 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:09.362375 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:11.878935 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:11.889532 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:11.889607 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:11.916145 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:11.916169 1458839 cri.go:89] found id: ""
	I1218 01:20:11.916178 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:11.916236 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:11.919885 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:11.919957 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:11.945351 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:11.945424 1458839 cri.go:89] found id: ""
	I1218 01:20:11.945440 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:11.945507 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:11.949493 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:11.949594 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:11.975135 1458839 cri.go:89] found id: ""
	I1218 01:20:11.975158 1458839 logs.go:282] 0 containers: []
	W1218 01:20:11.975167 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:11.975175 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:11.975242 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:12.002005 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:12.002040 1458839 cri.go:89] found id: ""
	I1218 01:20:12.002050 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:12.002119 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:12.007526 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:12.007613 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:12.038111 1458839 cri.go:89] found id: ""
	I1218 01:20:12.038135 1458839 logs.go:282] 0 containers: []
	W1218 01:20:12.038143 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:12.038150 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:12.038210 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:12.064251 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:12.064284 1458839 cri.go:89] found id: ""
	I1218 01:20:12.064294 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:12.064364 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:12.068326 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:12.068400 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:12.093044 1458839 cri.go:89] found id: ""
	I1218 01:20:12.093071 1458839 logs.go:282] 0 containers: []
	W1218 01:20:12.093080 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:12.093088 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:12.093150 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:12.117750 1458839 cri.go:89] found id: ""
	I1218 01:20:12.117826 1458839 logs.go:282] 0 containers: []
	W1218 01:20:12.117841 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:12.117856 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:12.117867 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:12.175060 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:12.175093 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:12.213823 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:12.213855 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:12.247814 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:12.247850 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:12.286320 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:12.286356 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:12.317095 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:12.317124 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:12.346426 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:12.346457 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:12.361619 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:12.361652 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:12.434757 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:12.434778 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:12.434791 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
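	[annotation] Note that the order of the "Gathering logs for ..." steps varies from cycle to cycle while the set of sources stays fixed. That is characteristic of ranging over a Go map, whose iteration order is deliberately unspecified and reshuffled on each range execution; a short sketch, assuming (hypothetically) that the log sources are keyed in a map:

	    package main

	    import "fmt"

	    func main() {
	        sources := map[string]string{"kubelet": "...", "dmesg": "...", "containerd": "..."}
	        for name := range sources { // order differs on each iteration of the loop body's caller
	            fmt.Println("Gathering logs for", name, "...")
	        }
	    }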
	I1218 01:20:14.987372 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:15.005034 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:15.005131 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:15.069205 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:15.069228 1458839 cri.go:89] found id: ""
	I1218 01:20:15.069237 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:15.069302 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:15.074279 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:15.074357 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:15.104085 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:15.104104 1458839 cri.go:89] found id: ""
	I1218 01:20:15.104121 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:15.104181 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:15.109144 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:15.109222 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:15.142043 1458839 cri.go:89] found id: ""
	I1218 01:20:15.142121 1458839 logs.go:282] 0 containers: []
	W1218 01:20:15.142151 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:15.142173 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:15.142271 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:15.171309 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:15.171383 1458839 cri.go:89] found id: ""
	I1218 01:20:15.171404 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:15.171488 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:15.176319 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:15.176442 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:15.205742 1458839 cri.go:89] found id: ""
	I1218 01:20:15.205819 1458839 logs.go:282] 0 containers: []
	W1218 01:20:15.205844 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:15.205863 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:15.205953 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:15.235346 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:15.235368 1458839 cri.go:89] found id: ""
	I1218 01:20:15.235376 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:15.235444 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:15.239987 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:15.240069 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:15.275416 1458839 cri.go:89] found id: ""
	I1218 01:20:15.275449 1458839 logs.go:282] 0 containers: []
	W1218 01:20:15.275458 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:15.275465 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:15.275538 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:15.302613 1458839 cri.go:89] found id: ""
	I1218 01:20:15.302695 1458839 logs.go:282] 0 containers: []
	W1218 01:20:15.302718 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:15.302746 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:15.302785 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:15.367185 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:15.367220 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:15.382486 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:15.382516 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:15.444045 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:15.444087 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:15.519601 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:15.519642 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:15.579336 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:15.579376 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:15.688861 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:15.688926 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:15.688952 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:15.734217 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:15.734296 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:15.772262 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:15.772334 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:18.307238 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:18.319000 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:18.319081 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:18.344700 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:18.344718 1458839 cri.go:89] found id: ""
	I1218 01:20:18.344725 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:18.344782 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:18.348445 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:18.348522 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:18.375992 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:18.376016 1458839 cri.go:89] found id: ""
	I1218 01:20:18.376027 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:18.376086 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:18.379863 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:18.379935 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:18.422152 1458839 cri.go:89] found id: ""
	I1218 01:20:18.422226 1458839 logs.go:282] 0 containers: []
	W1218 01:20:18.422249 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:18.422266 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:18.422351 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:18.450001 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:18.450036 1458839 cri.go:89] found id: ""
	I1218 01:20:18.450044 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:18.450102 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:18.455576 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:18.455699 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:18.484571 1458839 cri.go:89] found id: ""
	I1218 01:20:18.484605 1458839 logs.go:282] 0 containers: []
	W1218 01:20:18.484614 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:18.484647 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:18.484713 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:18.512081 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:18.512102 1458839 cri.go:89] found id: ""
	I1218 01:20:18.512110 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:18.512168 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:18.516134 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:18.516215 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:18.541752 1458839 cri.go:89] found id: ""
	I1218 01:20:18.541781 1458839 logs.go:282] 0 containers: []
	W1218 01:20:18.541789 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:18.541797 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:18.541860 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:18.571920 1458839 cri.go:89] found id: ""
	I1218 01:20:18.571945 1458839 logs.go:282] 0 containers: []
	W1218 01:20:18.571961 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:18.571976 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:18.571989 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:18.634746 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:18.634766 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:18.634779 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:18.669996 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:18.670035 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:18.710002 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:18.710041 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:18.738748 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:18.738778 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:18.753537 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:18.753563 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:18.786288 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:18.786320 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:18.816063 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:18.816091 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:18.847317 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:18.847351 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:21.406303 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:21.417617 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:21.417706 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:21.444304 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:21.444330 1458839 cri.go:89] found id: ""
	I1218 01:20:21.444338 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:21.444393 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:21.448894 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:21.448969 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:21.478732 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:21.478806 1458839 cri.go:89] found id: ""
	I1218 01:20:21.478821 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:21.478889 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:21.482732 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:21.482813 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:21.509307 1458839 cri.go:89] found id: ""
	I1218 01:20:21.509331 1458839 logs.go:282] 0 containers: []
	W1218 01:20:21.509340 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:21.509347 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:21.509408 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:21.539836 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:21.539860 1458839 cri.go:89] found id: ""
	I1218 01:20:21.539869 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:21.539935 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:21.543727 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:21.543832 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:21.569432 1458839 cri.go:89] found id: ""
	I1218 01:20:21.569465 1458839 logs.go:282] 0 containers: []
	W1218 01:20:21.569475 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:21.569482 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:21.569552 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:21.595044 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:21.595066 1458839 cri.go:89] found id: ""
	I1218 01:20:21.595074 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:21.595130 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:21.599066 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:21.599140 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:21.631678 1458839 cri.go:89] found id: ""
	I1218 01:20:21.631714 1458839 logs.go:282] 0 containers: []
	W1218 01:20:21.631724 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:21.631731 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:21.631807 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:21.673504 1458839 cri.go:89] found id: ""
	I1218 01:20:21.673536 1458839 logs.go:282] 0 containers: []
	W1218 01:20:21.673545 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:21.673561 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:21.673576 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:21.713063 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:21.713157 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:21.748992 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:21.749021 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:21.783096 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:21.783137 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:21.875897 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:21.875919 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:21.875932 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:21.914070 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:21.914107 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:21.955800 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:21.955841 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:21.998526 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:21.998556 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:22.068493 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:22.068532 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
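	[annotation] By this point the same probe cycle has repeated roughly every three seconds for about 25 s with identical results: the four static-pod containers exist, the add-on components never appear, and localhost:8443 keeps refusing connections. That pattern points at a kube-apiserver container that is present but not serving on the kubeconfig's port, rather than one that is merely slow to become ready. A minimal sketch of the poll-with-deadline pattern the log is executing (waitForAPIServer is a hypothetical helper, not minikube's API):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // waitForAPIServer retries a plain TCP dial on the log's ~3 s cadence
	    // until the deadline passes.
	    func waitForAPIServer(addr string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if conn, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
	                conn.Close()
	                return nil
	            }
	            time.Sleep(3 * time.Second)
	        }
	        return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
	    }

	    func main() {
	        if err := waitForAPIServer("localhost:8443", 30*time.Second); err != nil {
	            fmt.Println(err)
	        }
	    }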
	I1218 01:20:24.587706 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:24.598568 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:24.598642 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:24.627412 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:24.627433 1458839 cri.go:89] found id: ""
	I1218 01:20:24.627442 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:24.627502 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:24.631602 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:24.631686 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:24.659460 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:24.659485 1458839 cri.go:89] found id: ""
	I1218 01:20:24.659494 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:24.659553 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:24.663475 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:24.663557 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:24.693746 1458839 cri.go:89] found id: ""
	I1218 01:20:24.693774 1458839 logs.go:282] 0 containers: []
	W1218 01:20:24.693783 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:24.693802 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:24.693866 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:24.719922 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:24.719947 1458839 cri.go:89] found id: ""
	I1218 01:20:24.719956 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:24.720014 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:24.723795 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:24.723880 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:24.749719 1458839 cri.go:89] found id: ""
	I1218 01:20:24.749743 1458839 logs.go:282] 0 containers: []
	W1218 01:20:24.749752 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:24.749758 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:24.749870 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:24.775009 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:24.775031 1458839 cri.go:89] found id: ""
	I1218 01:20:24.775040 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:24.775099 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:24.779150 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:24.779261 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:24.808606 1458839 cri.go:89] found id: ""
	I1218 01:20:24.808658 1458839 logs.go:282] 0 containers: []
	W1218 01:20:24.808668 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:24.808675 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:24.808778 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:24.839207 1458839 cri.go:89] found id: ""
	I1218 01:20:24.839233 1458839 logs.go:282] 0 containers: []
	W1218 01:20:24.839243 1458839 logs.go:284] No container was found matching "storage-provisioner"
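
The enumeration pass above maps each control-plane component to "sudo crictl ps -a --quiet --name=<component>" and records the returned IDs (the cri.go:89 "found id" lines). A minimal Go sketch of that probe, assuming crictl is available and run locally rather than through minikube's SSH runner:

// Sketch only, not minikube's actual cri.go: enumerate containers (any state)
// for each named component with crictl, as the "listing CRI containers" lines imply.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers whose name matches the
// given component, e.g. "kube-apiserver". Assumes crictl is on PATH and sudo works.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line) // one container ID per line of --quiet output
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

In this run only kube-apiserver, etcd, kube-scheduler, and kube-controller-manager return IDs; coredns, kube-proxy, kindnet, and storage-provisioner come back empty, which is why only four components get per-container log gathering below.
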
	I1218 01:20:24.839278 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:24.839296 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:24.873922 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:24.873953 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:24.924464 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:24.924564 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:24.954138 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:24.954171 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:25.014901 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:25.014951 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:25.036272 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:25.036303 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:25.117236 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:25.117259 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:25.117271 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:25.154548 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:25.154581 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:25.190080 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:25.190119 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
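
One full gather cycle runs the commands above verbatim: journalctl for the kubelet and containerd units, a filtered dmesg, "crictl logs --tail 400" per found container, kubectl describe nodes, and a container-status fallback. A sketch that replays those commands locally (command strings copied from the run lines above; wrapping each in "bash -c" mirrors ssh_runner's quoting, a simplifying assumption):

// Sketch of the per-source gather dispatch visible in the cycle above.
// The 400-line tail matches the log; error handling is simplified.
package main

import (
	"fmt"
	"os/exec"
)

func gather(desc, cmd string) {
	fmt.Println("Gathering logs for", desc, "...")
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("failed %s: %v\n", desc, err)
	}
	fmt.Print(string(out))
}

func main() {
	// Container ID taken from the log above.
	apiserverID := "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("kube-apiserver ["+apiserverID+"]", "sudo /usr/local/bin/crictl logs --tail 400 "+apiserverID)
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
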
	I1218 01:20:27.740772 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:27.751995 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:27.752070 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:27.777760 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:27.777783 1458839 cri.go:89] found id: ""
	I1218 01:20:27.777792 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:27.777849 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:27.781840 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:27.781920 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:27.808478 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:27.808498 1458839 cri.go:89] found id: ""
	I1218 01:20:27.808506 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:27.808564 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:27.812334 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:27.812412 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:27.842029 1458839 cri.go:89] found id: ""
	I1218 01:20:27.842105 1458839 logs.go:282] 0 containers: []
	W1218 01:20:27.842130 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:27.842138 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:27.842207 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:27.867514 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:27.867536 1458839 cri.go:89] found id: ""
	I1218 01:20:27.867549 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:27.867608 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:27.871574 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:27.871648 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:27.901759 1458839 cri.go:89] found id: ""
	I1218 01:20:27.901782 1458839 logs.go:282] 0 containers: []
	W1218 01:20:27.901791 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:27.901797 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:27.901876 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:27.934744 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:27.934766 1458839 cri.go:89] found id: ""
	I1218 01:20:27.934783 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:27.934846 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:27.938803 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:27.938878 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:27.967151 1458839 cri.go:89] found id: ""
	I1218 01:20:27.967176 1458839 logs.go:282] 0 containers: []
	W1218 01:20:27.967186 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:27.967194 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:27.967254 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:28.005587 1458839 cri.go:89] found id: ""
	I1218 01:20:28.005615 1458839 logs.go:282] 0 containers: []
	W1218 01:20:28.005624 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:28.005642 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:28.005657 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:28.046230 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:28.046264 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:28.083059 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:28.083103 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:28.119920 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:28.119956 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:28.182667 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:28.182705 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:28.241977 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:28.242017 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:28.271001 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:28.271033 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:28.304703 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:28.304736 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:28.320022 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:28.320051 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:28.384954 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
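
The cycles repeat on a roughly 3-second cadence: the apiserver process exists (each pgrep returns quickly), but nothing answers on localhost:8443, so the wait loop gathers diagnostics and retries. A minimal sketch of that wait shape (an assumed reconstruction; the health endpoint and timings are inferred from the refused connections to localhost:8443, and the real minikube loop is more involved):

// Sketch of a wait-for-apiserver loop matching the cadence seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func healthy(client *http.Client) bool {
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		return false // e.g. "connection refused", as in the log
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's cert is not trusted by the host; skip verification for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if healthy(client) {
			fmt.Println("apiserver healthy")
			return
		}
		// On each miss the caller gathers diagnostics (as in the log), then retries.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver")
}
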
	I1218 01:20:30.886222 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:30.898001 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:30.898082 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:30.933994 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:30.934022 1458839 cri.go:89] found id: ""
	I1218 01:20:30.934031 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:30.934089 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:30.938038 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:30.938112 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:30.964439 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:30.964460 1458839 cri.go:89] found id: ""
	I1218 01:20:30.964469 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:30.964529 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:30.968596 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:30.968701 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:30.993962 1458839 cri.go:89] found id: ""
	I1218 01:20:30.993991 1458839 logs.go:282] 0 containers: []
	W1218 01:20:30.993999 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:30.994006 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:30.994075 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:31.020899 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:31.020925 1458839 cri.go:89] found id: ""
	I1218 01:20:31.020940 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:31.021000 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:31.024934 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:31.025010 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:31.057677 1458839 cri.go:89] found id: ""
	I1218 01:20:31.057715 1458839 logs.go:282] 0 containers: []
	W1218 01:20:31.057724 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:31.057730 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:31.057826 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:31.083640 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:31.083664 1458839 cri.go:89] found id: ""
	I1218 01:20:31.083674 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:31.083743 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:31.087703 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:31.087778 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:31.115609 1458839 cri.go:89] found id: ""
	I1218 01:20:31.115635 1458839 logs.go:282] 0 containers: []
	W1218 01:20:31.115644 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:31.115651 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:31.115722 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:31.155665 1458839 cri.go:89] found id: ""
	I1218 01:20:31.155693 1458839 logs.go:282] 0 containers: []
	W1218 01:20:31.155703 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:31.155717 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:31.155728 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:31.225647 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:31.225682 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:31.240676 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:31.240702 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:31.305226 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:31.305248 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:31.305262 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:31.342485 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:31.342518 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:31.382342 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:31.382374 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:31.414192 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:31.414228 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:31.441264 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:31.441295 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:31.470725 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:31.470759 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:34.004828 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:34.018476 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:34.018602 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:34.050491 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:34.050566 1458839 cri.go:89] found id: ""
	I1218 01:20:34.050609 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:34.050725 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:34.055749 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:34.055880 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:34.096275 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:34.096346 1458839 cri.go:89] found id: ""
	I1218 01:20:34.096379 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:34.096466 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:34.101145 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:34.101271 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:34.152847 1458839 cri.go:89] found id: ""
	I1218 01:20:34.152920 1458839 logs.go:282] 0 containers: []
	W1218 01:20:34.152943 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:34.152962 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:34.153055 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:34.243718 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:34.243781 1458839 cri.go:89] found id: ""
	I1218 01:20:34.243811 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:34.243897 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:34.250441 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:34.250562 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:34.295523 1458839 cri.go:89] found id: ""
	I1218 01:20:34.295597 1458839 logs.go:282] 0 containers: []
	W1218 01:20:34.295627 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:34.295646 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:34.295753 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:34.328279 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:34.328352 1458839 cri.go:89] found id: ""
	I1218 01:20:34.328374 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:34.328459 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:34.332614 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:34.332777 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:34.370997 1458839 cri.go:89] found id: ""
	I1218 01:20:34.371080 1458839 logs.go:282] 0 containers: []
	W1218 01:20:34.371103 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:34.371121 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:34.371226 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:34.405266 1458839 cri.go:89] found id: ""
	I1218 01:20:34.405340 1458839 logs.go:282] 0 containers: []
	W1218 01:20:34.405377 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:34.405408 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:34.405434 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:34.446038 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:34.446110 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:34.482015 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:34.482092 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:34.516912 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:34.516988 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:34.550792 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:34.550826 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:34.582587 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:34.582614 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:34.622129 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:34.622210 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:34.682912 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:34.682990 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:34.698459 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:34.698541 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:34.780517 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
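
Every cycle ends the same way: the version-pinned kubectl under /var/lib/minikube/binaries cannot reach localhost:8443 and exits 1, which ssh_runner reports as "Process exited with status 1". A sketch of that invocation and how the exit status surfaces in Go (paths copied from the log; run locally rather than over SSH):

// Sketch of the failing "describe nodes" step.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes "+
			"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down, kubectl prints "The connection to the server
		// localhost:8443 was refused" on stderr and exits 1, so err is an
		// *exec.ExitError and out holds the stderr text seen in the log.
		fmt.Printf("failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
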
	I1218 01:20:37.281313 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:37.291482 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:37.291553 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:37.315599 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:37.315621 1458839 cri.go:89] found id: ""
	I1218 01:20:37.315630 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:37.315685 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:37.319443 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:37.319514 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:37.348921 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:37.348943 1458839 cri.go:89] found id: ""
	I1218 01:20:37.348950 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:37.349006 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:37.352654 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:37.352728 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:37.379178 1458839 cri.go:89] found id: ""
	I1218 01:20:37.379203 1458839 logs.go:282] 0 containers: []
	W1218 01:20:37.379212 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:37.379219 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:37.379284 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:37.408022 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:37.408044 1458839 cri.go:89] found id: ""
	I1218 01:20:37.408052 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:37.408108 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:37.411786 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:37.411858 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:37.441192 1458839 cri.go:89] found id: ""
	I1218 01:20:37.441218 1458839 logs.go:282] 0 containers: []
	W1218 01:20:37.441227 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:37.441233 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:37.441338 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:37.476421 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:37.476444 1458839 cri.go:89] found id: ""
	I1218 01:20:37.476452 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:37.476507 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:37.480100 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:37.480183 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:37.506492 1458839 cri.go:89] found id: ""
	I1218 01:20:37.506515 1458839 logs.go:282] 0 containers: []
	W1218 01:20:37.506524 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:37.506531 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:37.506623 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:37.535943 1458839 cri.go:89] found id: ""
	I1218 01:20:37.535969 1458839 logs.go:282] 0 containers: []
	W1218 01:20:37.535978 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:37.535995 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:37.536029 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:37.583723 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:37.583753 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:37.616366 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:37.616401 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:37.651289 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:37.651322 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:37.677159 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:37.677192 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:37.709040 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:37.709082 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:37.768677 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:37.768718 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:37.783664 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:37.783695 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:37.859982 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:37.860005 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:37.860019 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:40.389890 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:40.401870 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:40.401936 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:40.446835 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:40.446854 1458839 cri.go:89] found id: ""
	I1218 01:20:40.446862 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:40.446924 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:40.451277 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:40.451350 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:40.479032 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:40.479050 1458839 cri.go:89] found id: ""
	I1218 01:20:40.479059 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:40.479113 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:40.483275 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:40.483341 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:40.518328 1458839 cri.go:89] found id: ""
	I1218 01:20:40.518349 1458839 logs.go:282] 0 containers: []
	W1218 01:20:40.518357 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:40.518363 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:40.518421 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:40.546780 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:40.546799 1458839 cri.go:89] found id: ""
	I1218 01:20:40.546807 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:40.546863 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:40.552718 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:40.552792 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:40.582992 1458839 cri.go:89] found id: ""
	I1218 01:20:40.583013 1458839 logs.go:282] 0 containers: []
	W1218 01:20:40.583023 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:40.583029 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:40.583094 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:40.612216 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:40.612276 1458839 cri.go:89] found id: ""
	I1218 01:20:40.612308 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:40.612395 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:40.617827 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:40.617947 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:40.648898 1458839 cri.go:89] found id: ""
	I1218 01:20:40.648967 1458839 logs.go:282] 0 containers: []
	W1218 01:20:40.648989 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:40.649006 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:40.649098 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:40.674694 1458839 cri.go:89] found id: ""
	I1218 01:20:40.674760 1458839 logs.go:282] 0 containers: []
	W1218 01:20:40.674785 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:40.674811 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:40.674858 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:40.706052 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:40.706082 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:40.739139 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:40.739182 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:40.787462 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:40.787495 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:40.846081 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:40.846117 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:40.880515 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:40.880547 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:40.940046 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:40.940086 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:40.957580 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:40.957610 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:41.027744 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:41.027812 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:41.027838 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:43.564730 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:43.575473 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:43.575541 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:43.608462 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:43.608481 1458839 cri.go:89] found id: ""
	I1218 01:20:43.608489 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:43.608547 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:43.613101 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:43.613173 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:43.654253 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:43.654276 1458839 cri.go:89] found id: ""
	I1218 01:20:43.654283 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:43.654338 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:43.658508 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:43.658585 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:43.712347 1458839 cri.go:89] found id: ""
	I1218 01:20:43.712374 1458839 logs.go:282] 0 containers: []
	W1218 01:20:43.712382 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:43.712389 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:43.712447 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:43.758850 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:43.758869 1458839 cri.go:89] found id: ""
	I1218 01:20:43.758877 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:43.758934 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:43.762992 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:43.763064 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:43.797930 1458839 cri.go:89] found id: ""
	I1218 01:20:43.797956 1458839 logs.go:282] 0 containers: []
	W1218 01:20:43.797965 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:43.797971 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:43.798041 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:43.826740 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:43.826779 1458839 cri.go:89] found id: ""
	I1218 01:20:43.826789 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:43.826844 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:43.831231 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:43.831307 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:43.865431 1458839 cri.go:89] found id: ""
	I1218 01:20:43.865454 1458839 logs.go:282] 0 containers: []
	W1218 01:20:43.865463 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:43.865470 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:43.865531 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:43.896690 1458839 cri.go:89] found id: ""
	I1218 01:20:43.896713 1458839 logs.go:282] 0 containers: []
	W1218 01:20:43.896722 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:43.896738 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:43.896751 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:43.980011 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:43.980041 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:44.039684 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:44.039710 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:44.136416 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:44.136437 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:44.136450 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:44.178570 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:44.178605 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:44.229698 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:44.229732 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:44.283657 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:44.283691 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:44.316315 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:44.316347 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:44.379177 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:44.379212 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
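
The "container status" probe used throughout is a shell one-liner with two fallbacks: "which crictl || echo crictl" resolves the binary's full path (or leaves the bare name for sudo's PATH lookup), and "|| sudo docker ps -a" covers hosts where crictl is absent or fails. The same logic in Go (a sketch, not minikube's code):

// Sketch of the container-status fallback chain.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	crictl, err := exec.LookPath("crictl") // like `which crictl`
	if err != nil {
		crictl = "crictl" // fall back to the bare name, as `echo crictl` does
	}
	if out, err := exec.Command("sudo", crictl, "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	// crictl failed or is missing entirely: try the docker CLI instead.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime CLI answered:", err)
		return
	}
	fmt.Print(string(out))
}
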
	I1218 01:20:46.894799 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:46.918461 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:46.918535 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:46.984079 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:46.984097 1458839 cri.go:89] found id: ""
	I1218 01:20:46.984105 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:46.984161 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:46.990076 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:46.990153 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:47.025586 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:47.025604 1458839 cri.go:89] found id: ""
	I1218 01:20:47.025612 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:47.025673 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:47.030159 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:47.030237 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:47.069950 1458839 cri.go:89] found id: ""
	I1218 01:20:47.070035 1458839 logs.go:282] 0 containers: []
	W1218 01:20:47.070059 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:47.070079 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:47.070168 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:47.100164 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:47.100236 1458839 cri.go:89] found id: ""
	I1218 01:20:47.100256 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:47.100348 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:47.104531 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:47.104681 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:47.153214 1458839 cri.go:89] found id: ""
	I1218 01:20:47.153303 1458839 logs.go:282] 0 containers: []
	W1218 01:20:47.153327 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:47.153348 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:47.153461 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:47.190367 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:47.190437 1458839 cri.go:89] found id: ""
	I1218 01:20:47.190459 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:47.190546 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:47.195914 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:47.196044 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:47.234117 1458839 cri.go:89] found id: ""
	I1218 01:20:47.234196 1458839 logs.go:282] 0 containers: []
	W1218 01:20:47.234221 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:47.234242 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:47.234352 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:47.262987 1458839 cri.go:89] found id: ""
	I1218 01:20:47.263052 1458839 logs.go:282] 0 containers: []
	W1218 01:20:47.263085 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:47.263131 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:47.263161 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:47.333946 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:47.333983 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:47.352773 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:47.352802 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:47.396268 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:47.396305 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:47.436968 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:47.436997 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:47.469803 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:47.469835 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:47.557402 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:20:47.557423 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:47.557436 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:47.610390 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:47.610420 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:47.678831 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:47.678881 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:50.225467 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:50.236420 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:50.236499 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:50.266351 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:50.266373 1458839 cri.go:89] found id: ""
	I1218 01:20:50.266382 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:50.266446 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:50.270295 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:50.270373 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:50.295680 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:50.295705 1458839 cri.go:89] found id: ""
	I1218 01:20:50.295713 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:50.295781 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:50.299765 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:50.299840 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:50.325761 1458839 cri.go:89] found id: ""
	I1218 01:20:50.325789 1458839 logs.go:282] 0 containers: []
	W1218 01:20:50.325798 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:50.325804 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:50.325868 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:50.351266 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:50.351291 1458839 cri.go:89] found id: ""
	I1218 01:20:50.351299 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:50.351357 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:50.355081 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:50.355161 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:50.380885 1458839 cri.go:89] found id: ""
	I1218 01:20:50.380909 1458839 logs.go:282] 0 containers: []
	W1218 01:20:50.380920 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:50.380927 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:50.380988 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:50.407519 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:50.407538 1458839 cri.go:89] found id: ""
	I1218 01:20:50.407546 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:50.407610 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:50.411335 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:50.411407 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:50.437303 1458839 cri.go:89] found id: ""
	I1218 01:20:50.437368 1458839 logs.go:282] 0 containers: []
	W1218 01:20:50.437380 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:50.437387 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:50.437458 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:50.464308 1458839 cri.go:89] found id: ""
	I1218 01:20:50.464381 1458839 logs.go:282] 0 containers: []
	W1218 01:20:50.464427 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:50.464454 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:50.464487 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:50.479198 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:50.479268 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:50.513470 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:50.513504 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:50.553415 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:50.553445 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:50.582983 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:50.583062 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:50.617781 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:50.617823 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:50.673718 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:50.673739 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:50.747753 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:50.747831 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:50.861782 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:20:50.861815 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:50.861837 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:53.439692 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:53.450144 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:53.450220 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:53.476029 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:53.476052 1458839 cri.go:89] found id: ""
	I1218 01:20:53.476060 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:53.476120 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:53.479811 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:53.479888 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:53.505682 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:53.505706 1458839 cri.go:89] found id: ""
	I1218 01:20:53.505715 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:53.505772 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:53.509539 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:53.509669 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:53.546212 1458839 cri.go:89] found id: ""
	I1218 01:20:53.546238 1458839 logs.go:282] 0 containers: []
	W1218 01:20:53.546252 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:53.546259 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:53.546321 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:53.571515 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:53.571534 1458839 cri.go:89] found id: ""
	I1218 01:20:53.571542 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:53.571598 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:53.575586 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:53.575665 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:53.602038 1458839 cri.go:89] found id: ""
	I1218 01:20:53.602060 1458839 logs.go:282] 0 containers: []
	W1218 01:20:53.602069 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:53.602075 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:53.602135 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:53.629757 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:53.629823 1458839 cri.go:89] found id: ""
	I1218 01:20:53.629855 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:53.630008 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:53.636291 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:53.636390 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:53.665700 1458839 cri.go:89] found id: ""
	I1218 01:20:53.665766 1458839 logs.go:282] 0 containers: []
	W1218 01:20:53.665787 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:53.665809 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:53.665900 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:53.703569 1458839 cri.go:89] found id: ""
	I1218 01:20:53.703604 1458839 logs.go:282] 0 containers: []
	W1218 01:20:53.703613 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:53.703634 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:53.703646 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:53.769736 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:20:53.769754 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:53.769767 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:53.804201 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:53.804235 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:53.840661 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:53.840692 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:53.875953 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:53.875986 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:53.916049 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:53.916083 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:53.977187 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:53.977224 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:53.992113 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:53.992143 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:54.024185 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:54.024212 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:56.555136 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:56.565097 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:56.565168 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:56.589741 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:56.589764 1458839 cri.go:89] found id: ""
	I1218 01:20:56.589772 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:56.589831 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:56.593526 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:56.593600 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:56.617631 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:56.617697 1458839 cri.go:89] found id: ""
	I1218 01:20:56.617708 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:56.617787 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:56.621536 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:56.621615 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:56.650932 1458839 cri.go:89] found id: ""
	I1218 01:20:56.650966 1458839 logs.go:282] 0 containers: []
	W1218 01:20:56.650975 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:56.650981 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:56.651043 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:56.692300 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:56.692324 1458839 cri.go:89] found id: ""
	I1218 01:20:56.692339 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:56.692403 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:56.697228 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:56.697310 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:56.728191 1458839 cri.go:89] found id: ""
	I1218 01:20:56.728218 1458839 logs.go:282] 0 containers: []
	W1218 01:20:56.728227 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:56.728235 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:56.728299 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:56.753797 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:56.753823 1458839 cri.go:89] found id: ""
	I1218 01:20:56.753832 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:56.753890 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:56.757780 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:56.757854 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:56.786696 1458839 cri.go:89] found id: ""
	I1218 01:20:56.786725 1458839 logs.go:282] 0 containers: []
	W1218 01:20:56.786735 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:56.786742 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:56.786826 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:56.812170 1458839 cri.go:89] found id: ""
	I1218 01:20:56.812198 1458839 logs.go:282] 0 containers: []
	W1218 01:20:56.812208 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:56.812222 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:56.812234 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:56.870582 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:20:56.870618 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:56.905353 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:20:56.905387 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:56.945957 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:56.945995 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:56.985798 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:20:56.985832 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:57.018936 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:20:57.019029 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:20:57.049023 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:20:57.049057 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:20:57.078608 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:20:57.078635 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:20:57.094520 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:20:57.094588 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:20:57.162771 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:20:59.663002 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:20:59.673625 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:20:59.673702 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:20:59.708248 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:20:59.708275 1458839 cri.go:89] found id: ""
	I1218 01:20:59.708283 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:20:59.708344 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:59.713153 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:20:59.713232 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:20:59.740292 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:20:59.740319 1458839 cri.go:89] found id: ""
	I1218 01:20:59.740327 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:20:59.740389 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:59.744288 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:20:59.744373 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:20:59.771714 1458839 cri.go:89] found id: ""
	I1218 01:20:59.771749 1458839 logs.go:282] 0 containers: []
	W1218 01:20:59.771758 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:20:59.771765 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:20:59.771829 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:20:59.798807 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:20:59.798828 1458839 cri.go:89] found id: ""
	I1218 01:20:59.798836 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:20:59.798894 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:59.802700 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:20:59.802776 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:20:59.828884 1458839 cri.go:89] found id: ""
	I1218 01:20:59.828963 1458839 logs.go:282] 0 containers: []
	W1218 01:20:59.829012 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:20:59.829044 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:20:59.829119 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:20:59.855236 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:20:59.855260 1458839 cri.go:89] found id: ""
	I1218 01:20:59.855267 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:20:59.855324 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:20:59.859228 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:20:59.859304 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:20:59.885377 1458839 cri.go:89] found id: ""
	I1218 01:20:59.885404 1458839 logs.go:282] 0 containers: []
	W1218 01:20:59.885414 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:20:59.885420 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:20:59.885483 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:20:59.910213 1458839 cri.go:89] found id: ""
	I1218 01:20:59.910240 1458839 logs.go:282] 0 containers: []
	W1218 01:20:59.910249 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:20:59.910263 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:20:59.910275 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:20:59.972227 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:20:59.972261 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:00.008523 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:00.008572 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:00.214576 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:00.214623 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:00.270278 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:00.270317 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:00.380247 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:00.380318 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:00.380354 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:00.451600 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:00.451681 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:00.498762 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:00.498794 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:00.527009 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:00.527041 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:03.060229 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:03.071073 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:03.071153 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:03.097513 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:03.097535 1458839 cri.go:89] found id: ""
	I1218 01:21:03.097543 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:03.097600 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:03.101428 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:03.101510 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:03.128445 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:03.128468 1458839 cri.go:89] found id: ""
	I1218 01:21:03.128477 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:03.128532 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:03.132358 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:03.132438 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:03.158222 1458839 cri.go:89] found id: ""
	I1218 01:21:03.158246 1458839 logs.go:282] 0 containers: []
	W1218 01:21:03.158255 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:03.158261 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:03.158321 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:03.183963 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:03.184037 1458839 cri.go:89] found id: ""
	I1218 01:21:03.184059 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:03.184133 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:03.187850 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:03.187922 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:03.215001 1458839 cri.go:89] found id: ""
	I1218 01:21:03.215024 1458839 logs.go:282] 0 containers: []
	W1218 01:21:03.215032 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:03.215039 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:03.215105 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:03.244751 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:03.244829 1458839 cri.go:89] found id: ""
	I1218 01:21:03.244851 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:03.244936 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:03.248808 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:03.248880 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:03.274485 1458839 cri.go:89] found id: ""
	I1218 01:21:03.274560 1458839 logs.go:282] 0 containers: []
	W1218 01:21:03.274583 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:03.274600 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:03.274689 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:03.300112 1458839 cri.go:89] found id: ""
	I1218 01:21:03.300188 1458839 logs.go:282] 0 containers: []
	W1218 01:21:03.300211 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:03.300237 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:03.300278 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:03.365011 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:03.365052 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:03.447362 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:03.447390 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:03.447406 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:03.490566 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:03.490600 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:03.518861 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:03.518891 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:03.548783 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:03.548822 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:03.563863 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:03.563902 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:03.598034 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:03.598069 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:03.631595 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:03.631632 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:06.171133 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:06.181329 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:06.181401 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:06.206162 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:06.206184 1458839 cri.go:89] found id: ""
	I1218 01:21:06.206192 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:06.206250 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:06.209907 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:06.209980 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:06.234113 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:06.234138 1458839 cri.go:89] found id: ""
	I1218 01:21:06.234146 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:06.234206 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:06.238128 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:06.238209 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:06.262684 1458839 cri.go:89] found id: ""
	I1218 01:21:06.262708 1458839 logs.go:282] 0 containers: []
	W1218 01:21:06.262716 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:06.262723 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:06.262787 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:06.287735 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:06.287759 1458839 cri.go:89] found id: ""
	I1218 01:21:06.287767 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:06.287824 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:06.291447 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:06.291525 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:06.321769 1458839 cri.go:89] found id: ""
	I1218 01:21:06.321794 1458839 logs.go:282] 0 containers: []
	W1218 01:21:06.321802 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:06.321809 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:06.321894 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:06.347213 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:06.347235 1458839 cri.go:89] found id: ""
	I1218 01:21:06.347244 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:06.347301 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:06.351021 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:06.351096 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:06.377273 1458839 cri.go:89] found id: ""
	I1218 01:21:06.377328 1458839 logs.go:282] 0 containers: []
	W1218 01:21:06.377336 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:06.377342 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:06.377401 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:06.415131 1458839 cri.go:89] found id: ""
	I1218 01:21:06.415156 1458839 logs.go:282] 0 containers: []
	W1218 01:21:06.415164 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:06.415179 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:06.415191 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:06.473173 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:06.473215 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:06.506071 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:06.506106 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:06.545576 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:06.545611 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:06.575483 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:06.575512 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:06.605198 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:06.605230 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:06.636861 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:06.636892 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:06.700820 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:06.700860 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:06.717762 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:06.717791 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:06.782280 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
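The same gather-and-retry cycle repeats on a roughly three-second cadence; each iteration begins with the pgrep probe seen on the next line. An equivalent wait loop, sketched in shell (the pattern string is the one used throughout this log; the sleep interval is read off the timestamps, not taken from minikube's source):

	# Poll until a kube-apiserver process for this profile appears.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
		sleep 3	# matches the ~3 s gap between probes in this log
	done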
	I1218 01:21:09.282911 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:09.293591 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:09.293661 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:09.327480 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:09.327503 1458839 cri.go:89] found id: ""
	I1218 01:21:09.327517 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:09.327580 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:09.331272 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:09.331343 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:09.356661 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:09.356682 1458839 cri.go:89] found id: ""
	I1218 01:21:09.356692 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:09.356753 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:09.360964 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:09.361041 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:09.389307 1458839 cri.go:89] found id: ""
	I1218 01:21:09.389333 1458839 logs.go:282] 0 containers: []
	W1218 01:21:09.389342 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:09.389348 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:09.389409 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:09.434333 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:09.434357 1458839 cri.go:89] found id: ""
	I1218 01:21:09.434366 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:09.434426 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:09.438725 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:09.438803 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:09.467605 1458839 cri.go:89] found id: ""
	I1218 01:21:09.467629 1458839 logs.go:282] 0 containers: []
	W1218 01:21:09.467638 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:09.467644 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:09.467734 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:09.492793 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:09.492815 1458839 cri.go:89] found id: ""
	I1218 01:21:09.492824 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:09.492882 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:09.496689 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:09.496783 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:09.522641 1458839 cri.go:89] found id: ""
	I1218 01:21:09.522668 1458839 logs.go:282] 0 containers: []
	W1218 01:21:09.522677 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:09.522684 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:09.522745 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:09.547733 1458839 cri.go:89] found id: ""
	I1218 01:21:09.547758 1458839 logs.go:282] 0 containers: []
	W1218 01:21:09.547767 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:09.547782 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:09.547793 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:09.605665 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:09.605699 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:09.672137 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:09.672162 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:09.672181 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:09.709133 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:09.709171 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:09.741926 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:09.741958 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:09.777833 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:09.777868 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:09.807378 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:09.807404 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:09.822066 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:09.822097 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:09.853127 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:09.853166 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:12.389438 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:12.400378 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:12.400454 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:12.430269 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:12.430289 1458839 cri.go:89] found id: ""
	I1218 01:21:12.430298 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:12.430354 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:12.436176 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:12.436246 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:12.472929 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:12.472950 1458839 cri.go:89] found id: ""
	I1218 01:21:12.472958 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:12.473014 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:12.476676 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:12.476747 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:12.502753 1458839 cri.go:89] found id: ""
	I1218 01:21:12.502779 1458839 logs.go:282] 0 containers: []
	W1218 01:21:12.502788 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:12.502795 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:12.502858 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:12.527108 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:12.527132 1458839 cri.go:89] found id: ""
	I1218 01:21:12.527140 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:12.527195 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:12.530851 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:12.530926 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:12.555109 1458839 cri.go:89] found id: ""
	I1218 01:21:12.555133 1458839 logs.go:282] 0 containers: []
	W1218 01:21:12.555142 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:12.555148 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:12.555208 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:12.588658 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:12.588681 1458839 cri.go:89] found id: ""
	I1218 01:21:12.588690 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:12.588745 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:12.592430 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:12.592503 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:12.618075 1458839 cri.go:89] found id: ""
	I1218 01:21:12.618146 1458839 logs.go:282] 0 containers: []
	W1218 01:21:12.618161 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:12.618168 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:12.618235 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:12.642877 1458839 cri.go:89] found id: ""
	I1218 01:21:12.642902 1458839 logs.go:282] 0 containers: []
	W1218 01:21:12.642912 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:12.642928 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:12.642941 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:12.657461 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:12.657490 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:12.692841 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:12.692874 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:12.719557 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:12.719586 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:12.750383 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:12.750419 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:12.779447 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:12.779475 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:12.838992 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:12.839025 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:12.907342 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:12.907365 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:12.907382 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:12.946487 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:12.946522 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:15.480762 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:15.492235 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:15.492309 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:15.547175 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:15.547207 1458839 cri.go:89] found id: ""
	I1218 01:21:15.547216 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:15.547286 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:15.552127 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:15.552197 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:15.594470 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:15.594505 1458839 cri.go:89] found id: ""
	I1218 01:21:15.594514 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:15.594600 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:15.598828 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:15.598902 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:15.625586 1458839 cri.go:89] found id: ""
	I1218 01:21:15.625612 1458839 logs.go:282] 0 containers: []
	W1218 01:21:15.625621 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:15.625627 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:15.625690 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:15.654460 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:15.654483 1458839 cri.go:89] found id: ""
	I1218 01:21:15.654492 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:15.654584 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:15.658303 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:15.658373 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:15.686975 1458839 cri.go:89] found id: ""
	I1218 01:21:15.687001 1458839 logs.go:282] 0 containers: []
	W1218 01:21:15.687009 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:15.687016 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:15.687077 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:15.712925 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:15.712949 1458839 cri.go:89] found id: ""
	I1218 01:21:15.712957 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:15.713015 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:15.716784 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:15.716858 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:15.741711 1458839 cri.go:89] found id: ""
	I1218 01:21:15.741774 1458839 logs.go:282] 0 containers: []
	W1218 01:21:15.741789 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:15.741797 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:15.741859 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:15.766077 1458839 cri.go:89] found id: ""
	I1218 01:21:15.766102 1458839 logs.go:282] 0 containers: []
	W1218 01:21:15.766111 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:15.766127 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:15.766168 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:15.825059 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:15.825097 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:15.840082 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:15.840121 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:15.918251 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:15.918269 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:15.918286 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:15.956447 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:15.956480 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:15.997965 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:15.998058 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:16.030232 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:16.030259 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:16.061937 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:16.061971 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:16.091632 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:16.091659 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:18.626674 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:18.640278 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:18.640345 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:18.667229 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:18.667254 1458839 cri.go:89] found id: ""
	I1218 01:21:18.667262 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:18.667320 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:18.671356 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:18.671420 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:18.714290 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:18.714313 1458839 cri.go:89] found id: ""
	I1218 01:21:18.714322 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:18.714379 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:18.718479 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:18.718612 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:18.754403 1458839 cri.go:89] found id: ""
	I1218 01:21:18.754475 1458839 logs.go:282] 0 containers: []
	W1218 01:21:18.754498 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:18.754525 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:18.754629 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:18.800877 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:18.800899 1458839 cri.go:89] found id: ""
	I1218 01:21:18.800907 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:18.800963 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:18.804924 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:18.804996 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:18.839374 1458839 cri.go:89] found id: ""
	I1218 01:21:18.839399 1458839 logs.go:282] 0 containers: []
	W1218 01:21:18.839408 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:18.839415 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:18.839475 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:18.871438 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:18.871461 1458839 cri.go:89] found id: ""
	I1218 01:21:18.871470 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:18.871527 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:18.875872 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:18.875945 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:18.912890 1458839 cri.go:89] found id: ""
	I1218 01:21:18.912914 1458839 logs.go:282] 0 containers: []
	W1218 01:21:18.912923 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:18.912929 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:18.912990 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:18.965360 1458839 cri.go:89] found id: ""
	I1218 01:21:18.965385 1458839 logs.go:282] 0 containers: []
	W1218 01:21:18.965394 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:18.965410 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:18.965424 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:18.982849 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:18.982877 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:19.032522 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:19.032557 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:19.093696 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:19.093774 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:19.142422 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:19.142457 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:19.188390 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:19.188420 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:19.235470 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:19.235507 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:19.267401 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:19.267437 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:19.334137 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:19.334180 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:19.414571 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:21.914819 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:21.925882 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:21.925952 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:21.981875 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:21.981914 1458839 cri.go:89] found id: ""
	I1218 01:21:21.981936 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:21.982020 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:21.988782 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:21.988880 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:22.041127 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:22.041152 1458839 cri.go:89] found id: ""
	I1218 01:21:22.041160 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:22.041233 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:22.045392 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:22.045486 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:22.098842 1458839 cri.go:89] found id: ""
	I1218 01:21:22.098868 1458839 logs.go:282] 0 containers: []
	W1218 01:21:22.098877 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:22.098884 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:22.098953 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:22.149294 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:22.149331 1458839 cri.go:89] found id: ""
	I1218 01:21:22.149340 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:22.149410 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:22.161435 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:22.161524 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:22.234907 1458839 cri.go:89] found id: ""
	I1218 01:21:22.234942 1458839 logs.go:282] 0 containers: []
	W1218 01:21:22.234951 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:22.234964 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:22.235040 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:22.273102 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:22.273135 1458839 cri.go:89] found id: ""
	I1218 01:21:22.273144 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:22.273209 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:22.277034 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:22.277121 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:22.310896 1458839 cri.go:89] found id: ""
	I1218 01:21:22.310943 1458839 logs.go:282] 0 containers: []
	W1218 01:21:22.310952 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:22.310959 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:22.311034 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:22.363071 1458839 cri.go:89] found id: ""
	I1218 01:21:22.363123 1458839 logs.go:282] 0 containers: []
	W1218 01:21:22.363134 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:22.363170 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:22.363183 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:22.474477 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:22.474497 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:22.474525 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:22.572495 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:22.572545 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:22.666623 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:22.666673 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:22.749183 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:22.749229 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:22.801503 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:22.801542 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:22.865415 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:22.865454 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:22.983589 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:22.983636 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:23.039276 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:23.039318 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:25.597696 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:25.607884 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:25.607956 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:25.634009 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:25.634032 1458839 cri.go:89] found id: ""
	I1218 01:21:25.634040 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:25.634103 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:25.637786 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:25.637858 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:25.666858 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:25.666886 1458839 cri.go:89] found id: ""
	I1218 01:21:25.666894 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:25.666948 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:25.670581 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:25.670662 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:25.696254 1458839 cri.go:89] found id: ""
	I1218 01:21:25.696278 1458839 logs.go:282] 0 containers: []
	W1218 01:21:25.696287 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:25.696293 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:25.696352 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:25.724244 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:25.724269 1458839 cri.go:89] found id: ""
	I1218 01:21:25.724277 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:25.724338 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:25.728166 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:25.728238 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:25.754273 1458839 cri.go:89] found id: ""
	I1218 01:21:25.754299 1458839 logs.go:282] 0 containers: []
	W1218 01:21:25.754307 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:25.754314 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:25.754374 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:25.779645 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:25.779673 1458839 cri.go:89] found id: ""
	I1218 01:21:25.779682 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:25.779763 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:25.783953 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:25.784025 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:25.809245 1458839 cri.go:89] found id: ""
	I1218 01:21:25.809271 1458839 logs.go:282] 0 containers: []
	W1218 01:21:25.809279 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:25.809286 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:25.809365 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:25.838604 1458839 cri.go:89] found id: ""
	I1218 01:21:25.838630 1458839 logs.go:282] 0 containers: []
	W1218 01:21:25.838638 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:25.838652 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:25.838663 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:25.897363 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:25.897400 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:25.966006 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:25.966027 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:25.966040 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:25.998460 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:25.998494 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:26.028753 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:26.028785 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:26.061168 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:26.061199 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:26.076783 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:26.076818 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:26.115932 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:26.115965 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:26.166154 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:26.166188 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
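The same gathering sweep, done by hand — a sketch assuming crictl is on the node's PATH, and limited to the four control-plane names this run actually queries:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
    for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        sudo crictl logs --tail 400 "$id"
      done
    done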
	I1218 01:21:28.700381 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:28.711161 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:28.711234 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:28.741529 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:28.741553 1458839 cri.go:89] found id: ""
	I1218 01:21:28.741562 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:28.741622 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:28.745579 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:28.745657 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:28.770867 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:28.770901 1458839 cri.go:89] found id: ""
	I1218 01:21:28.770910 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:28.770967 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:28.774667 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:28.774767 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:28.801133 1458839 cri.go:89] found id: ""
	I1218 01:21:28.801167 1458839 logs.go:282] 0 containers: []
	W1218 01:21:28.801177 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:28.801184 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:28.801253 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:28.827199 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:28.827223 1458839 cri.go:89] found id: ""
	I1218 01:21:28.827234 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:28.827291 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:28.831298 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:28.831377 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:28.856422 1458839 cri.go:89] found id: ""
	I1218 01:21:28.856458 1458839 logs.go:282] 0 containers: []
	W1218 01:21:28.856468 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:28.856474 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:28.856542 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:28.886718 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:28.886741 1458839 cri.go:89] found id: ""
	I1218 01:21:28.886749 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:28.886808 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:28.890967 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:28.891044 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:28.919008 1458839 cri.go:89] found id: ""
	I1218 01:21:28.919081 1458839 logs.go:282] 0 containers: []
	W1218 01:21:28.919106 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:28.919125 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:28.919211 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:28.947030 1458839 cri.go:89] found id: ""
	I1218 01:21:28.947053 1458839 logs.go:282] 0 containers: []
	W1218 01:21:28.947061 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:28.947076 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:28.947088 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:28.981220 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:28.981255 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:29.020018 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:29.020051 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:29.054853 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:29.054886 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:29.096892 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:29.096922 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:29.174699 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:29.174723 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:29.174736 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:29.209347 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:29.209376 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:29.240249 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:29.240289 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:29.302438 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:29.302481 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:31.818284 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:31.828593 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:31.828696 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:31.855329 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:31.855349 1458839 cri.go:89] found id: ""
	I1218 01:21:31.855357 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:31.855414 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:31.859364 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:31.859442 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:31.885169 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:31.885193 1458839 cri.go:89] found id: ""
	I1218 01:21:31.885202 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:31.885260 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:31.889164 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:31.889253 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:31.916709 1458839 cri.go:89] found id: ""
	I1218 01:21:31.916733 1458839 logs.go:282] 0 containers: []
	W1218 01:21:31.916742 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:31.916748 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:31.916806 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:31.948384 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:31.948406 1458839 cri.go:89] found id: ""
	I1218 01:21:31.948414 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:31.948472 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:31.952296 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:31.952370 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:31.977574 1458839 cri.go:89] found id: ""
	I1218 01:21:31.977600 1458839 logs.go:282] 0 containers: []
	W1218 01:21:31.977608 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:31.977615 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:31.977701 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:32.007285 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:32.007310 1458839 cri.go:89] found id: ""
	I1218 01:21:32.007319 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:32.007416 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:32.011629 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:32.011775 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:32.037715 1458839 cri.go:89] found id: ""
	I1218 01:21:32.037791 1458839 logs.go:282] 0 containers: []
	W1218 01:21:32.037817 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:32.037831 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:32.037897 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:32.063764 1458839 cri.go:89] found id: ""
	I1218 01:21:32.063790 1458839 logs.go:282] 0 containers: []
	W1218 01:21:32.063799 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:32.063813 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:32.063825 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:32.122578 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:32.122613 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:32.138123 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:32.138152 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:32.188483 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:32.188517 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:32.231572 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:32.231606 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:32.259468 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:32.259497 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:32.326318 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:32.326339 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:32.326352 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:32.364799 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:32.364832 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:32.395628 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:32.395663 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
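The probes above land roughly three seconds apart. A hedged bash reconstruction of that wait loop, run inside the node: the 3 s interval is read off the timestamps, and the 300 s deadline is an assumption, not a value stated anywhere in this log:

    deadline=$((SECONDS + 300))   # assumed timeout, not taken from this log
    until curl -ksf https://localhost:8443/healthz >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never became healthy" >&2; exit 1; }
      sleep 3                     # matches the ~3 s spacing of the probes above
    done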
	I1218 01:21:34.925684 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:34.938333 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:34.938411 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:34.976912 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:34.976937 1458839 cri.go:89] found id: ""
	I1218 01:21:34.976945 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:34.977006 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:34.986464 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:34.986536 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:35.025311 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:35.025338 1458839 cri.go:89] found id: ""
	I1218 01:21:35.025348 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:35.025415 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:35.029757 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:35.029836 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:35.074837 1458839 cri.go:89] found id: ""
	I1218 01:21:35.074861 1458839 logs.go:282] 0 containers: []
	W1218 01:21:35.074871 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:35.074877 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:35.074939 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:35.104486 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:35.104513 1458839 cri.go:89] found id: ""
	I1218 01:21:35.104521 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:35.104585 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:35.109244 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:35.109321 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:35.139542 1458839 cri.go:89] found id: ""
	I1218 01:21:35.139568 1458839 logs.go:282] 0 containers: []
	W1218 01:21:35.139577 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:35.139584 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:35.139652 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:35.234034 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:35.234060 1458839 cri.go:89] found id: ""
	I1218 01:21:35.234069 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:35.234127 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:35.238682 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:35.238763 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:35.267502 1458839 cri.go:89] found id: ""
	I1218 01:21:35.267528 1458839 logs.go:282] 0 containers: []
	W1218 01:21:35.267536 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:35.267543 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:35.267602 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:35.303622 1458839 cri.go:89] found id: ""
	I1218 01:21:35.303649 1458839 logs.go:282] 0 containers: []
	W1218 01:21:35.303658 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:35.303672 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:35.303685 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:35.369886 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:35.369923 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:35.386314 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:35.386346 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:35.474399 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:21:35.474422 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:35.474435 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:35.537602 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:35.537642 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:35.582249 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:35.582278 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:35.615353 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:35.615393 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:35.651328 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:35.651359 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:35.686738 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:35.686773 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:38.221849 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:38.234740 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:38.234806 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:38.264614 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:38.264663 1458839 cri.go:89] found id: ""
	I1218 01:21:38.264671 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:38.264728 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:38.272903 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:38.272984 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:38.332500 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:38.332518 1458839 cri.go:89] found id: ""
	I1218 01:21:38.332526 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:38.332590 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:38.337085 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:38.337203 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:38.366908 1458839 cri.go:89] found id: ""
	I1218 01:21:38.366931 1458839 logs.go:282] 0 containers: []
	W1218 01:21:38.366939 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:38.366945 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:38.367002 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:38.395099 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:38.395118 1458839 cri.go:89] found id: ""
	I1218 01:21:38.395126 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:38.395182 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:38.399619 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:38.399685 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:38.439744 1458839 cri.go:89] found id: ""
	I1218 01:21:38.439767 1458839 logs.go:282] 0 containers: []
	W1218 01:21:38.439775 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:38.439781 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:38.439845 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:38.467765 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:38.467835 1458839 cri.go:89] found id: ""
	I1218 01:21:38.467856 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:38.467945 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:38.472403 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:38.472538 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:38.505942 1458839 cri.go:89] found id: ""
	I1218 01:21:38.505973 1458839 logs.go:282] 0 containers: []
	W1218 01:21:38.505982 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:38.505988 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:38.506053 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:38.539700 1458839 cri.go:89] found id: ""
	I1218 01:21:38.539724 1458839 logs.go:282] 0 containers: []
	W1218 01:21:38.539733 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:38.539747 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:38.539759 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:38.581294 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:38.581367 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:38.612215 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:38.612253 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:38.663500 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:38.663533 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:38.678676 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:38.678706 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:38.717103 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:38.717277 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:38.780738 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:38.780775 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:38.854051 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
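
The "connection refused" on localhost:8443, repeated on every describe-nodes attempt below, is the crux of this failure: nothing is answering on the apiserver port, so every kubectl call against the local kubeconfig fails the same way. A minimal Go sketch of that reachability check, using the host:port from the error text (illustrative only, not minikube code):

// probe_apiserver.go: check whether anything is listening on the
// apiserver port named in the error above. A "connection refused"
// from DialTimeout corresponds to the kubectl failure in this log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
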
	I1218 01:21:38.854114 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:38.854151 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:38.911577 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:38.911652 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
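
Each cycle above follows the same discovery pattern: for every control-plane component, list matching containers with "sudo crictl ps -a --quiet --name=<component>", treat each non-empty output line as a container ID, and gather logs only for the IDs actually found. A sketch of that loop in Go, mirroring the commands in this log (the helper and its names are mine, not minikube's):

// crictl_list.go: list container IDs per component the way the
// "listing CRI containers" / "found id:" lines above imply.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	// --quiet prints one container ID per line; empty output means
	// no container matched, as in the kube-proxy/kindnet cases above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

In this run only kube-apiserver, etcd, kube-scheduler and kube-controller-manager resolve to IDs; coredns, kube-proxy, kindnet and storage-provisioner stay empty throughout, consistent with an apiserver that never became healthy enough for anything to be scheduled.
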
	I1218 01:21:41.462513 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:41.473180 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:41.473251 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:41.498985 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:41.499008 1458839 cri.go:89] found id: ""
	I1218 01:21:41.499016 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:41.499072 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:41.502766 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:41.502842 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:41.528116 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:41.528139 1458839 cri.go:89] found id: ""
	I1218 01:21:41.528148 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:41.528206 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:41.531895 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:41.531976 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:41.565162 1458839 cri.go:89] found id: ""
	I1218 01:21:41.565186 1458839 logs.go:282] 0 containers: []
	W1218 01:21:41.565195 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:41.565201 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:41.565265 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:41.591389 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:41.591412 1458839 cri.go:89] found id: ""
	I1218 01:21:41.591421 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:41.591479 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:41.595280 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:41.595356 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:41.620689 1458839 cri.go:89] found id: ""
	I1218 01:21:41.620721 1458839 logs.go:282] 0 containers: []
	W1218 01:21:41.620730 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:41.620736 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:41.620806 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:41.646486 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:41.646512 1458839 cri.go:89] found id: ""
	I1218 01:21:41.646521 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:41.646599 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:41.650552 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:41.650738 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:41.680377 1458839 cri.go:89] found id: ""
	I1218 01:21:41.680444 1458839 logs.go:282] 0 containers: []
	W1218 01:21:41.680468 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:41.680486 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:41.680579 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:41.704836 1458839 cri.go:89] found id: ""
	I1218 01:21:41.704916 1458839 logs.go:282] 0 containers: []
	W1218 01:21:41.704939 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:41.704980 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:41.705009 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:41.763729 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:41.763762 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:41.778581 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:41.778610 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:41.847009 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:21:41.847029 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:41.847043 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:41.877636 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:41.877669 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:41.920997 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:41.921034 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:41.959867 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:41.959893 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:41.990024 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:41.990054 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:42.028398 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:42.028461 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
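
The "container status" step uses a shell fallback rather than a fixed binary: in "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", the backtick substitution resolves crictl's full path when it is installed; otherwise the bare name fails and the docker branch runs. A Go sketch of the same fallback, reusing the exact command string from the log (the wrapper itself is illustrative):

// container_status.go: prefer crictl for "ps -a", fall back to docker.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Identical to the bash -c string in the log: if crictl is missing
	// or errors out, the `|| sudo docker ps -a` branch takes over.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
	}
	fmt.Print(string(out))
}
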
	I1218 01:21:44.562483 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:44.573190 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:44.573261 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:44.616089 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:44.616113 1458839 cri.go:89] found id: ""
	I1218 01:21:44.616121 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:44.616183 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:44.621391 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:44.621471 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:44.661010 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:44.661036 1458839 cri.go:89] found id: ""
	I1218 01:21:44.661044 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:44.661098 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:44.665261 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:44.665365 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:44.694979 1458839 cri.go:89] found id: ""
	I1218 01:21:44.695006 1458839 logs.go:282] 0 containers: []
	W1218 01:21:44.695015 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:44.695021 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:44.695090 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:44.724102 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:44.724126 1458839 cri.go:89] found id: ""
	I1218 01:21:44.724135 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:44.724196 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:44.728382 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:44.728458 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:44.755674 1458839 cri.go:89] found id: ""
	I1218 01:21:44.755702 1458839 logs.go:282] 0 containers: []
	W1218 01:21:44.755711 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:44.755717 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:44.755776 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:44.790392 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:44.790418 1458839 cri.go:89] found id: ""
	I1218 01:21:44.790426 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:44.790525 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:44.797097 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:44.797204 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:44.839316 1458839 cri.go:89] found id: ""
	I1218 01:21:44.839339 1458839 logs.go:282] 0 containers: []
	W1218 01:21:44.839347 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:44.839371 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:44.839439 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:44.873276 1458839 cri.go:89] found id: ""
	I1218 01:21:44.873316 1458839 logs.go:282] 0 containers: []
	W1218 01:21:44.873326 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:44.873358 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:44.873382 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:44.945604 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:44.945670 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:44.967089 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:44.967160 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:44.996112 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:44.996145 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:45.100592 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:45.102260 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:45.163799 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:45.163833 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:45.269266 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:21:45.269352 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:45.269377 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:45.321061 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:45.321098 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:45.356156 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:45.356187 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
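
Per-container logs are pulled with a fixed crictl path and a bounded tail, "sudo /usr/local/bin/crictl logs --tail 400 <id>", so a crash-looping component cannot flood the report. A small Go wrapper around that command, with the path and tail length taken from the log (the wrapper is an assumption, not minikube code):

// container_logs.go: dump the last 400 lines of one container's logs.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: container_logs <container-id>")
		return
	}
	out, err := exec.Command("/bin/bash", "-c",
		"sudo /usr/local/bin/crictl logs --tail 400 "+os.Args[1]).CombinedOutput()
	if err != nil {
		fmt.Println("crictl logs failed:", err)
	}
	fmt.Print(string(out))
}
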
	I1218 01:21:47.896477 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:47.916272 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:47.916347 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:47.961726 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:47.961746 1458839 cri.go:89] found id: ""
	I1218 01:21:47.961754 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:47.961812 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:47.972867 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:47.972940 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:48.051747 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:48.051770 1458839 cri.go:89] found id: ""
	I1218 01:21:48.051778 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:48.051841 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:48.056328 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:48.056428 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:48.094838 1458839 cri.go:89] found id: ""
	I1218 01:21:48.094861 1458839 logs.go:282] 0 containers: []
	W1218 01:21:48.094870 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:48.094876 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:48.094944 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:48.134296 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:48.134315 1458839 cri.go:89] found id: ""
	I1218 01:21:48.134323 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:48.134381 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:48.138864 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:48.138935 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:48.187551 1458839 cri.go:89] found id: ""
	I1218 01:21:48.187573 1458839 logs.go:282] 0 containers: []
	W1218 01:21:48.187581 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:48.187588 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:48.187649 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:48.227156 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:48.227176 1458839 cri.go:89] found id: ""
	I1218 01:21:48.227184 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:48.227242 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:48.231567 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:48.231639 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:48.265470 1458839 cri.go:89] found id: ""
	I1218 01:21:48.265493 1458839 logs.go:282] 0 containers: []
	W1218 01:21:48.265502 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:48.265508 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:48.265568 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:48.301122 1458839 cri.go:89] found id: ""
	I1218 01:21:48.301144 1458839 logs.go:282] 0 containers: []
	W1218 01:21:48.301152 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:48.301166 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:48.301177 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:48.317126 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:48.317152 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:48.409041 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:21:48.409065 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:48.409077 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:48.464275 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:48.464361 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:48.513517 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:48.513591 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:48.579530 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:48.579568 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:48.611770 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:48.611804 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:48.639405 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:48.639436 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:48.674642 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:48.674678 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
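
Host-side services (kubelet, containerd) are not containers, so their logs come from journald instead: "journalctl -u <unit> -n 400". A Go sketch of that gathering step, with the unit names and line count read off the log (the function is illustrative):

// unit_logs.go: fetch the last 400 journald lines for each systemd unit.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit string, lines int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).Output()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(u, 400)
		if err != nil {
			fmt.Printf("%s: %v\n", u, err)
			continue
		}
		fmt.Printf("--- %s (last 400 lines) ---\n%s", u, logs)
	}
}
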
	I1218 01:21:51.220191 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:51.231812 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:51.231887 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:51.266980 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:51.267001 1458839 cri.go:89] found id: ""
	I1218 01:21:51.267009 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:51.267063 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:51.271165 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:51.271241 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:51.309834 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:51.309856 1458839 cri.go:89] found id: ""
	I1218 01:21:51.309864 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:51.309922 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:51.314323 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:51.314396 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:51.342619 1458839 cri.go:89] found id: ""
	I1218 01:21:51.342645 1458839 logs.go:282] 0 containers: []
	W1218 01:21:51.342654 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:51.342661 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:51.342722 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:51.370347 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:51.370369 1458839 cri.go:89] found id: ""
	I1218 01:21:51.370377 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:51.370441 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:51.376079 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:51.376152 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:51.405156 1458839 cri.go:89] found id: ""
	I1218 01:21:51.405181 1458839 logs.go:282] 0 containers: []
	W1218 01:21:51.405189 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:51.405196 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:51.405255 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:51.436185 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:51.436209 1458839 cri.go:89] found id: ""
	I1218 01:21:51.436219 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:51.436278 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:51.443710 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:51.443783 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:51.488282 1458839 cri.go:89] found id: ""
	I1218 01:21:51.488303 1458839 logs.go:282] 0 containers: []
	W1218 01:21:51.488312 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:51.488318 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:51.488376 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:51.522834 1458839 cri.go:89] found id: ""
	I1218 01:21:51.522855 1458839 logs.go:282] 0 containers: []
	W1218 01:21:51.522864 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:51.522877 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:51.522889 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:51.542053 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:51.542082 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:51.636481 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:21:51.636510 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:51.636524 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:51.698793 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:51.698827 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:51.746508 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:51.746534 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:51.777602 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:51.777627 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:51.810150 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:51.810180 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:51.881646 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:51.881731 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:51.937622 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:51.937657 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:54.484653 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:54.494891 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:54.494964 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:54.524194 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:54.524216 1458839 cri.go:89] found id: ""
	I1218 01:21:54.524224 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:54.524281 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:54.528106 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:54.528179 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:54.557825 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:54.557847 1458839 cri.go:89] found id: ""
	I1218 01:21:54.557855 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:54.557915 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:54.561680 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:54.561754 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:54.586258 1458839 cri.go:89] found id: ""
	I1218 01:21:54.586283 1458839 logs.go:282] 0 containers: []
	W1218 01:21:54.586293 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:54.586299 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:54.586360 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:54.615159 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:54.615183 1458839 cri.go:89] found id: ""
	I1218 01:21:54.615192 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:54.615250 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:54.618942 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:54.619020 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:54.646351 1458839 cri.go:89] found id: ""
	I1218 01:21:54.646373 1458839 logs.go:282] 0 containers: []
	W1218 01:21:54.646383 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:54.646389 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:54.646445 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:54.684247 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:54.684270 1458839 cri.go:89] found id: ""
	I1218 01:21:54.684278 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:54.684335 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:54.688750 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:54.688824 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:54.717934 1458839 cri.go:89] found id: ""
	I1218 01:21:54.717967 1458839 logs.go:282] 0 containers: []
	W1218 01:21:54.717976 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:54.717982 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:54.718042 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:54.756906 1458839 cri.go:89] found id: ""
	I1218 01:21:54.756929 1458839 logs.go:282] 0 containers: []
	W1218 01:21:54.756937 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:54.756950 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:54.756962 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:54.826376 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:54.826412 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:21:54.872692 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:54.872732 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:54.938097 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:54.938136 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:54.954765 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:54.954795 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:55.056575 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:21:55.056593 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:55.056609 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:55.091282 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:55.091375 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:55.132371 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:55.132456 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:55.160118 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:55.160205 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
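
The timestamps show the whole sequence repeating on a roughly three-second cadence: each round begins with "sudo pgrep -xnf kube-apiserver.*minikube.*" to confirm an apiserver process exists, and while the node stays unready the same logs are re-gathered. A Go sketch of just the pgrep step inside such a retry loop (the pattern and interval are read off this log; the loop bounds are assumptions):

// wait_apiserver.go: poll for a kube-apiserver process, as the
// repeating pgrep lines in this log do.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pattern := "kube-apiserver.*minikube.*"
	for i := 0; i < 10; i++ {
		// pgrep exits non-zero when no process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the cadence of the cycles above
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
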
	I1218 01:21:57.716594 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:21:57.726951 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:21:57.727021 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:21:57.754997 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:57.755016 1458839 cri.go:89] found id: ""
	I1218 01:21:57.755025 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:21:57.755080 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:57.758815 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:21:57.758887 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:21:57.785153 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:57.785176 1458839 cri.go:89] found id: ""
	I1218 01:21:57.785185 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:21:57.785243 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:57.788974 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:21:57.789051 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:21:57.814019 1458839 cri.go:89] found id: ""
	I1218 01:21:57.814042 1458839 logs.go:282] 0 containers: []
	W1218 01:21:57.814051 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:21:57.814057 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:21:57.814122 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:21:57.839785 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:57.839817 1458839 cri.go:89] found id: ""
	I1218 01:21:57.839825 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:21:57.839881 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:57.843539 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:21:57.843609 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:21:57.869731 1458839 cri.go:89] found id: ""
	I1218 01:21:57.869754 1458839 logs.go:282] 0 containers: []
	W1218 01:21:57.869763 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:21:57.869768 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:21:57.869827 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:21:57.902852 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:57.902872 1458839 cri.go:89] found id: ""
	I1218 01:21:57.902880 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:21:57.902938 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:21:57.906755 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:21:57.906824 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:21:57.939769 1458839 cri.go:89] found id: ""
	I1218 01:21:57.939798 1458839 logs.go:282] 0 containers: []
	W1218 01:21:57.939807 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:21:57.939814 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:21:57.939873 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:21:57.965450 1458839 cri.go:89] found id: ""
	I1218 01:21:57.965477 1458839 logs.go:282] 0 containers: []
	W1218 01:21:57.965486 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:21:57.965501 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:21:57.965512 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:21:58.015738 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:21:58.015772 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:21:58.078247 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:21:58.078285 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:21:58.151500 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:21:58.151523 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:21:58.151536 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:21:58.184221 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:21:58.184253 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:21:58.220103 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:21:58.220141 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:21:58.250538 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:21:58.250567 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:21:58.267194 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:21:58.267224 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:21:58.316567 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:21:58.316602 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:22:00.849975 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:22:00.860712 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:22:00.860785 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:22:00.887619 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:22:00.887650 1458839 cri.go:89] found id: ""
	I1218 01:22:00.887660 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:22:00.887720 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:00.891694 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:22:00.891775 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:22:00.918308 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:22:00.918330 1458839 cri.go:89] found id: ""
	I1218 01:22:00.918338 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:22:00.918402 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:00.922880 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:22:00.922955 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:22:00.951760 1458839 cri.go:89] found id: ""
	I1218 01:22:00.951785 1458839 logs.go:282] 0 containers: []
	W1218 01:22:00.951793 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:22:00.951800 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:22:00.951862 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:22:00.981382 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:22:00.981404 1458839 cri.go:89] found id: ""
	I1218 01:22:00.981414 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:22:00.981478 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:00.985469 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:22:00.985547 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:22:01.011452 1458839 cri.go:89] found id: ""
	I1218 01:22:01.011480 1458839 logs.go:282] 0 containers: []
	W1218 01:22:01.011490 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:22:01.011497 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:22:01.011562 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:22:01.038619 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:22:01.038643 1458839 cri.go:89] found id: ""
	I1218 01:22:01.038651 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:22:01.038739 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:01.042551 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:22:01.042631 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:22:01.067764 1458839 cri.go:89] found id: ""
	I1218 01:22:01.067787 1458839 logs.go:282] 0 containers: []
	W1218 01:22:01.067796 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:22:01.067802 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:22:01.067866 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:22:01.095165 1458839 cri.go:89] found id: ""
	I1218 01:22:01.095190 1458839 logs.go:282] 0 containers: []
	W1218 01:22:01.095198 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:22:01.095212 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:22:01.095232 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:22:01.134860 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:22:01.134893 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:22:01.163034 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:22:01.163066 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:22:01.202307 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:22:01.202336 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:22:01.236120 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:22:01.236154 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:22:01.272962 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:22:01.272998 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:22:01.303432 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:22:01.303466 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:22:01.367156 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:22:01.367193 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:22:01.382527 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:22:01.382555 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:22:01.467406 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:22:03.968819 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:22:03.979321 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:22:03.979393 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:22:04.009990 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:22:04.010019 1458839 cri.go:89] found id: ""
	I1218 01:22:04.010029 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:22:04.010103 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:04.014383 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:22:04.014464 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:22:04.042938 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:22:04.042961 1458839 cri.go:89] found id: ""
	I1218 01:22:04.042970 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:22:04.043033 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:04.047101 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:22:04.047180 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:22:04.077002 1458839 cri.go:89] found id: ""
	I1218 01:22:04.077026 1458839 logs.go:282] 0 containers: []
	W1218 01:22:04.077035 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:22:04.077042 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:22:04.077105 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:22:04.107016 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:22:04.107040 1458839 cri.go:89] found id: ""
	I1218 01:22:04.107049 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:22:04.107107 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:04.110867 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:22:04.110943 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:22:04.135386 1458839 cri.go:89] found id: ""
	I1218 01:22:04.135411 1458839 logs.go:282] 0 containers: []
	W1218 01:22:04.135419 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:22:04.135425 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:22:04.135485 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:22:04.165177 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:22:04.165202 1458839 cri.go:89] found id: ""
	I1218 01:22:04.165210 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:22:04.165293 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:04.169107 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:22:04.169191 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:22:04.195209 1458839 cri.go:89] found id: ""
	I1218 01:22:04.195234 1458839 logs.go:282] 0 containers: []
	W1218 01:22:04.195244 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:22:04.195251 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:22:04.195313 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:22:04.223523 1458839 cri.go:89] found id: ""
	I1218 01:22:04.223550 1458839 logs.go:282] 0 containers: []
	W1218 01:22:04.223570 1458839 logs.go:284] No container was found matching "storage-provisioner"
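Each round above is minikube enumerating control-plane containers by name before it collects logs; any component that returns no ID (coredns, kube-proxy, kindnet and storage-provisioner here) is skipped in the gathering pass that follows. A minimal sketch of the same query, assuming crictl is installed on the node and containerd exposes its CRI socket at the default path:

    # List containers in any state whose name matches the component; print IDs only.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      ps -a --quiet --name=kube-apiserver

An empty result from this command is what the log records as 'found id: ""' followed by '0 containers: []'.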
	I1218 01:22:04.223588 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:22:04.223605 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:22:04.269639 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:22:04.269673 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:22:04.299034 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:22:04.299127 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:22:04.327688 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:22:04.327719 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:22:04.386649 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:22:04.386685 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:22:04.475874 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
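The describe-nodes failure is a symptom rather than a cause: nothing is serving on the apiserver port inside the node, so kubectl is refused. A quick confirmation from the host, sketched with <profile> as a hypothetical placeholder for the node container name (not taken from this log) and assuming ss and curl are present in the node image:

    # Is anything listening on the apiserver port inside the node container?
    docker exec <profile> ss -ltn | grep 8443 || echo "no listener on 8443"
    # Probe the secure port directly; "connection refused" matches the error above.
    docker exec <profile> curl -ks --max-time 2 https://localhost:8443/healthz || true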
	I1218 01:22:04.475901 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:22:04.475915 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:22:04.510513 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:22:04.510543 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:22:04.541595 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:22:04.541636 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:22:04.556346 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:22:04.556376 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:22:07.089082 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:22:07.099496 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:22:07.099566 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:22:07.127469 1458839 cri.go:89] found id: "0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:22:07.127492 1458839 cri.go:89] found id: ""
	I1218 01:22:07.127501 1458839 logs.go:282] 1 containers: [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322]
	I1218 01:22:07.127559 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:07.131198 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:22:07.131273 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:22:07.159381 1458839 cri.go:89] found id: "f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:22:07.159404 1458839 cri.go:89] found id: ""
	I1218 01:22:07.159413 1458839 logs.go:282] 1 containers: [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306]
	I1218 01:22:07.159476 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:07.163499 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:22:07.163573 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:22:07.188445 1458839 cri.go:89] found id: ""
	I1218 01:22:07.188474 1458839 logs.go:282] 0 containers: []
	W1218 01:22:07.188482 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:22:07.188489 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:22:07.188613 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:22:07.215768 1458839 cri.go:89] found id: "2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:22:07.215792 1458839 cri.go:89] found id: ""
	I1218 01:22:07.215801 1458839 logs.go:282] 1 containers: [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa]
	I1218 01:22:07.215884 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:07.219737 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:22:07.219812 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:22:07.247812 1458839 cri.go:89] found id: ""
	I1218 01:22:07.247836 1458839 logs.go:282] 0 containers: []
	W1218 01:22:07.247846 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:22:07.247852 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:22:07.247914 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:22:07.274226 1458839 cri.go:89] found id: "a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:22:07.274254 1458839 cri.go:89] found id: ""
	I1218 01:22:07.274263 1458839 logs.go:282] 1 containers: [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd]
	I1218 01:22:07.274322 1458839 ssh_runner.go:195] Run: which crictl
	I1218 01:22:07.278183 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:22:07.278279 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:22:07.303611 1458839 cri.go:89] found id: ""
	I1218 01:22:07.303639 1458839 logs.go:282] 0 containers: []
	W1218 01:22:07.303656 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:22:07.303663 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:22:07.303735 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:22:07.335358 1458839 cri.go:89] found id: ""
	I1218 01:22:07.335385 1458839 logs.go:282] 0 containers: []
	W1218 01:22:07.335403 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:22:07.335420 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:22:07.335435 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:22:07.364580 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:22:07.364617 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:22:07.425592 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:22:07.425696 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:22:07.445386 1458839 logs.go:123] Gathering logs for kube-apiserver [0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322] ...
	I1218 01:22:07.445467 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322"
	I1218 01:22:07.485799 1458839 logs.go:123] Gathering logs for etcd [f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306] ...
	I1218 01:22:07.485837 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306"
	I1218 01:22:07.519082 1458839 logs.go:123] Gathering logs for kube-controller-manager [a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd] ...
	I1218 01:22:07.519165 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd"
	I1218 01:22:07.556211 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:22:07.556240 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:22:07.587259 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:22:07.587292 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:22:07.650108 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:22:07.650187 1458839 logs.go:123] Gathering logs for kube-scheduler [2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa] ...
	I1218 01:22:07.650217 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa"
	I1218 01:22:10.187784 1458839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:22:10.198693 1458839 kubeadm.go:602] duration metric: took 4m3.380935243s to restartPrimaryControlPlane
	W1218 01:22:10.198766 1458839 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1218 01:22:10.198834 1458839 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 01:22:10.678007 1458839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:22:10.691911 1458839 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:22:10.700454 1458839 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:22:10.700525 1458839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:22:10.708458 1458839 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:22:10.708477 1458839 kubeadm.go:158] found existing configuration files:
	
	I1218 01:22:10.708551 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:22:10.716560 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:22:10.716683 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:22:10.724617 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:22:10.732828 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:22:10.732895 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:22:10.743151 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:22:10.752296 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:22:10.752361 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:22:10.760013 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:22:10.768011 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:22:10.768087 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
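The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so that kubeadm regenerates it. In this run each grep exits with status 2 because the files are already absent, making every rm -f a no-op. The same logic as a standalone loop, a sketch of what runs on the node:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the kubeconfig only if it already targets the expected endpoint.
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done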
	I1218 01:22:10.775742 1458839 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:22:10.897986 1458839 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:22:10.898405 1458839 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:22:10.975074 1458839 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:26:22.899450 1458839 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:26:22.899483 1458839 kubeadm.go:319] 
	I1218 01:26:22.899552 1458839 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 01:26:22.903498 1458839 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:26:22.903559 1458839 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:26:22.903649 1458839 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:26:22.903705 1458839 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:26:22.903740 1458839 kubeadm.go:319] OS: Linux
	I1218 01:26:22.903785 1458839 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:26:22.903833 1458839 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:26:22.903880 1458839 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:26:22.903928 1458839 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:26:22.903976 1458839 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:26:22.904024 1458839 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:26:22.904070 1458839 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:26:22.904118 1458839 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:26:22.904165 1458839 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:26:22.904238 1458839 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:26:22.904333 1458839 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:26:22.904422 1458839 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:26:22.904484 1458839 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:26:22.907974 1458839 out.go:252]   - Generating certificates and keys ...
	I1218 01:26:22.908065 1458839 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:26:22.908130 1458839 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:26:22.908206 1458839 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 01:26:22.908267 1458839 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 01:26:22.908336 1458839 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 01:26:22.908390 1458839 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 01:26:22.908452 1458839 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 01:26:22.908514 1458839 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 01:26:22.908588 1458839 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 01:26:22.908700 1458839 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 01:26:22.908740 1458839 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 01:26:22.908795 1458839 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:26:22.908845 1458839 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:26:22.908900 1458839 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:26:22.908957 1458839 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:26:22.909020 1458839 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:26:22.909074 1458839 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:26:22.909158 1458839 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:26:22.909224 1458839 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:26:22.912883 1458839 out.go:252]   - Booting up control plane ...
	I1218 01:26:22.913057 1458839 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:26:22.913198 1458839 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:26:22.913326 1458839 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:26:22.913498 1458839 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:26:22.913644 1458839 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:26:22.913808 1458839 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:26:22.913941 1458839 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:26:22.914016 1458839 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:26:22.914207 1458839 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:26:22.914323 1458839 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:26:22.914393 1458839 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001331663s
	I1218 01:26:22.914397 1458839 kubeadm.go:319] 
	I1218 01:26:22.914457 1458839 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:26:22.914491 1458839 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:26:22.914602 1458839 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:26:22.914606 1458839 kubeadm.go:319] 
	I1218 01:26:22.914718 1458839 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:26:22.914753 1458839 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:26:22.914785 1458839 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1218 01:26:22.914894 1458839 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001331663s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
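The whole failure above reduces to one observation: the kubelet never answered its health endpoint within kubeadm's 4m0s window. The probe kubeadm runs, plus the two commands its hint recommends, can be replayed by hand on the node to separate "not listening" from "listening but unhealthy":

    # kubeadm's probe; "connection refused" means the kubelet is not listening at all.
    curl -sS --max-time 2 http://127.0.0.1:10248/healthz; echo
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet -n 100 --no-pager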
	
	I1218 01:26:22.914968 1458839 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 01:26:22.915397 1458839 kubeadm.go:319] 
	I1218 01:26:23.354216 1458839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:26:23.368008 1458839 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:26:23.368070 1458839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:26:23.378900 1458839 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:26:23.378932 1458839 kubeadm.go:158] found existing configuration files:
	
	I1218 01:26:23.378986 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:26:23.395840 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:26:23.395921 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:26:23.413166 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:26:23.428150 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:26:23.428213 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:26:23.438196 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:26:23.447181 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:26:23.447292 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:26:23.459203 1458839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:26:23.467984 1458839 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:26:23.468104 1458839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 01:26:23.477156 1458839 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:26:23.523502 1458839 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:26:23.523563 1458839 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:26:23.617045 1458839 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:26:23.617122 1458839 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:26:23.617160 1458839 kubeadm.go:319] OS: Linux
	I1218 01:26:23.617209 1458839 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:26:23.617260 1458839 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:26:23.617311 1458839 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:26:23.617370 1458839 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:26:23.617422 1458839 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:26:23.617475 1458839 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:26:23.617524 1458839 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:26:23.617575 1458839 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:26:23.617625 1458839 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:26:23.695152 1458839 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:26:23.695267 1458839 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:26:23.695363 1458839 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:26:23.702447 1458839 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:26:23.706852 1458839 out.go:252]   - Generating certificates and keys ...
	I1218 01:26:23.706946 1458839 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:26:23.707014 1458839 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:26:23.707099 1458839 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 01:26:23.707164 1458839 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 01:26:23.707237 1458839 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 01:26:23.707295 1458839 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 01:26:23.707360 1458839 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 01:26:23.707424 1458839 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 01:26:23.707607 1458839 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 01:26:23.707814 1458839 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 01:26:23.708213 1458839 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 01:26:23.708395 1458839 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:26:24.304974 1458839 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:26:24.484977 1458839 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:26:24.924981 1458839 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:26:25.040086 1458839 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:26:25.108304 1458839 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:26:25.108403 1458839 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:26:25.108471 1458839 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:26:25.112324 1458839 out.go:252]   - Booting up control plane ...
	I1218 01:26:25.112436 1458839 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:26:25.112513 1458839 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:26:25.112588 1458839 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:26:25.151027 1458839 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:26:25.151139 1458839 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:26:25.177136 1458839 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:26:25.177236 1458839 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:26:25.177276 1458839 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:26:25.372300 1458839 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:26:25.372420 1458839 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:30:25.372983 1458839 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001015673s
	I1218 01:30:25.373274 1458839 kubeadm.go:319] 
	I1218 01:30:25.373345 1458839 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:30:25.373379 1458839 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:30:25.373484 1458839 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:30:25.373489 1458839 kubeadm.go:319] 
	I1218 01:30:25.373594 1458839 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:30:25.373626 1458839 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:30:25.373667 1458839 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:30:25.373672 1458839 kubeadm.go:319] 
	I1218 01:30:25.377741 1458839 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:30:25.378176 1458839 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:30:25.378286 1458839 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:30:25.378548 1458839 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 01:30:25.378553 1458839 kubeadm.go:319] 
	I1218 01:30:25.378622 1458839 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
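The second SystemVerification warning is the most suggestive lead: this 5.15 AWS kernel runs the node on cgroup v1, which kubelet v1.35 treats as opt-in only. Per the warning text, the opt-in is the FailCgroupV1 kubelet option; a sketch of the fragment that would need to end up in the kubelet configuration the [kubelet-start] lines above write to /var/lib/kubelet/config.yaml (how minikube would thread it through its generated config is not shown in this log):

    # Sketch only: print the KubeletConfiguration fragment the warning refers to.
    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF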
	I1218 01:30:25.378677 1458839 kubeadm.go:403] duration metric: took 12m18.62298779s to StartCluster
	I1218 01:30:25.378710 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:30:25.378773 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:30:25.425001 1458839 cri.go:89] found id: ""
	I1218 01:30:25.425025 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.425035 1458839 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:30:25.425041 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:30:25.425099 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:30:25.487693 1458839 cri.go:89] found id: ""
	I1218 01:30:25.487715 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.487723 1458839 logs.go:284] No container was found matching "etcd"
	I1218 01:30:25.487730 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:30:25.487855 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:30:25.530922 1458839 cri.go:89] found id: ""
	I1218 01:30:25.530945 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.530953 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:30:25.530959 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:30:25.531024 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:30:25.580163 1458839 cri.go:89] found id: ""
	I1218 01:30:25.580196 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.580205 1458839 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:30:25.580218 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:30:25.580290 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:30:25.611616 1458839 cri.go:89] found id: ""
	I1218 01:30:25.611643 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.611652 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:30:25.611658 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:30:25.611717 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:30:25.651571 1458839 cri.go:89] found id: ""
	I1218 01:30:25.651598 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.651607 1458839 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:30:25.651614 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:30:25.651673 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:30:25.702487 1458839 cri.go:89] found id: ""
	I1218 01:30:25.702511 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.702520 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:30:25.702526 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:30:25.702590 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:30:25.751157 1458839 cri.go:89] found id: ""
	I1218 01:30:25.751182 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.751191 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:30:25.751201 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:30:25.751213 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:30:25.823563 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:30:25.823665 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:30:25.842924 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:30:25.842956 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:30:25.952149 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:30:25.952222 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:30:25.952261 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:30:26.014471 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:30:26.014555 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
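With the control plane torn down by the earlier reset, every crictl listing in this final pass comes back empty, so the only remaining signal is node-level: the containerd and kubelet journals and the kernel ring buffer. The same three sources, collected by hand on the node:

    sudo journalctl -u containerd -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager
    # Warnings and worse only, mirroring the dmesg filter minikube uses above.
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400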
	W1218 01:30:26.062166 1458839 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001015673s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 01:30:26.062211 1458839 out.go:285] * 
	W1218 01:30:26.062262 1458839 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001015673s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:30:26.062273 1458839 out.go:285] * 
	W1218 01:30:26.064418 1458839 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:30:26.069406 1458839 out.go:203] 
	W1218 01:30:26.071604 1458839 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001015673s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:30:26.071667 1458839 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:30:26.071689 1458839 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:30:26.074955 1458839 out.go:203] 

** /stderr **
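Every kubeadm pass in the transcript above fails the same way: the kubelet never answers its health probe on 127.0.0.1:10248, and the preflight warnings point at the cgroup v1 deprecation in kubelet v1.35 (the KubeletConfiguration option 'FailCgroupV1') and at the cgroup driver. As a sketch of the remediation the Suggestion line in the log itself proposes (profile name and flags are copied from the failing invocation; whether this actually clears the v1.35 cgroup v1 validation on this host is not verified here):

	# retry with the kubelet cgroup driver pinned to systemd, per the log's own suggestion
	out/minikube-linux-arm64 start -p kubernetes-upgrade-675544 --memory=3072 \
	  --kubernetes-version=v1.35.0-rc.1 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd

	# reproduce kubeadm's health probe inside the node container (hypothetical debugging step)
	docker exec kubernetes-upgrade-675544 curl -sSL http://127.0.0.1:10248/healthz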
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-675544 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-675544 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-675544 version --output=json: exit status 1 (118.348117ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.85.2:8443 was refused - did you specify the right host or port?

** /stderr **
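The clientVersion block still prints because 'kubectl version' computes it locally; only the serverVersion query needs the API server, and the stderr line shows that query dying on a TCP refusal at 192.168.85.2:8443. A quick probe of the endpoint taken from that error (a hypothetical debugging step, run from the test host) separates an unreachable apiserver from a stale kubeconfig:

	curl -k --connect-timeout 5 https://192.168.85.2:8443/healthz || echo 'apiserver unreachable'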
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-18 01:30:27.036864096 +0000 UTC m=+4776.970582119
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-675544
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-675544:

-- stdout --
	[
	    {
	        "Id": "abd55b6079cc0eb6c49ebbad0231fdbdcf5728ae0a05af290d8a92f90fa0d7ee",
	        "Created": "2025-12-18T01:17:18.416917994Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1458970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:17:51.212289616Z",
	            "FinishedAt": "2025-12-18T01:17:50.211794264Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/abd55b6079cc0eb6c49ebbad0231fdbdcf5728ae0a05af290d8a92f90fa0d7ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/abd55b6079cc0eb6c49ebbad0231fdbdcf5728ae0a05af290d8a92f90fa0d7ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/abd55b6079cc0eb6c49ebbad0231fdbdcf5728ae0a05af290d8a92f90fa0d7ee/hosts",
	        "LogPath": "/var/lib/docker/containers/abd55b6079cc0eb6c49ebbad0231fdbdcf5728ae0a05af290d8a92f90fa0d7ee/abd55b6079cc0eb6c49ebbad0231fdbdcf5728ae0a05af290d8a92f90fa0d7ee-json.log",
	        "Name": "/kubernetes-upgrade-675544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-675544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-675544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "abd55b6079cc0eb6c49ebbad0231fdbdcf5728ae0a05af290d8a92f90fa0d7ee",
	                "LowerDir": "/var/lib/docker/overlay2/a8d5c27a12f2ab520c826b11dfb7494d797704b5cee6ee39c89314e70da095da-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8d5c27a12f2ab520c826b11dfb7494d797704b5cee6ee39c89314e70da095da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8d5c27a12f2ab520c826b11dfb7494d797704b5cee6ee39c89314e70da095da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8d5c27a12f2ab520c826b11dfb7494d797704b5cee6ee39c89314e70da095da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-675544",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-675544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-675544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-675544",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-675544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5847034d29c7fecd6d7f4052acb67df60bc9e0548322743404516050cfd462e4",
	            "SandboxKey": "/var/run/docker/netns/5847034d29c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34127"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34128"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34131"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34129"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34130"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-675544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:2f:c1:43:3f:f9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6c5a5af0f144d8cefac70cc2966c124f297b3e11db29d7b17b2b735c3e612926",
	                    "EndpointID": "b00d2248a486dbc7db4556d772d11bceea4c33f38630775e8075bc25e33482df",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-675544",
	                        "abd55b6079cc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
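The inspect output above is the whole document; when only one field matters, 'docker inspect -f' takes a Go template, the same mechanism minikube uses further down in this log to resolve the SSH port for 22/tcp. For example, to read the published host port for the apiserver (the expected value, 34130, comes from the Ports map above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-675544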
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-675544 -n kubernetes-upgrade-675544
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-675544 -n kubernetes-upgrade-675544: exit status 2 (420.969465ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
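'minikube status --format' likewise takes a Go template over the status struct, so {{.Host}} reports Running even while the control plane is down, which is why the harness treats exit status 2 as "may be ok". A broader template would surface the unhealthy components (field names are assumed from minikube's default status output, not verified against this build):

	out/minikube-linux-arm64 status -p kubernetes-upgrade-675544 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'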
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-675544 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-675544 logs -n 25: (1.010698327s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-459533 sudo systemctl status docker --all --full --no-pager                                            │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo systemctl cat docker --no-pager                                                            │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo cat /etc/docker/daemon.json                                                                │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo docker system info                                                                         │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo cri-dockerd --version                                                                      │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo systemctl cat containerd --no-pager                                                        │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo cat /etc/containerd/config.toml                                                            │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo containerd config dump                                                                     │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo systemctl status crio --all --full --no-pager                                              │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo systemctl cat crio --no-pager                                                              │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ ssh     │ -p cilium-459533 sudo crio config                                                                                │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │                     │
	│ delete  │ -p cilium-459533                                                                                                 │ cilium-459533            │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │ 18 Dec 25 01:26 UTC │
	│ start   │ -p force-systemd-env-984117 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-984117 │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │ 18 Dec 25 01:26 UTC │
	│ ssh     │ force-systemd-env-984117 ssh cat /etc/containerd/config.toml                                                     │ force-systemd-env-984117 │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │ 18 Dec 25 01:26 UTC │
	│ delete  │ -p force-systemd-env-984117                                                                                      │ force-systemd-env-984117 │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │ 18 Dec 25 01:26 UTC │
	│ start   │ -p cert-expiration-976781 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd     │ cert-expiration-976781   │ jenkins │ v1.37.0 │ 18 Dec 25 01:26 UTC │ 18 Dec 25 01:27 UTC │
	│ start   │ -p cert-expiration-976781 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd  │ cert-expiration-976781   │ jenkins │ v1.37.0 │ 18 Dec 25 01:30 UTC │ 18 Dec 25 01:30 UTC │
	│ delete  │ -p cert-expiration-976781                                                                                        │ cert-expiration-976781   │ jenkins │ v1.37.0 │ 18 Dec 25 01:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:30:19
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:30:19.945710 1503721 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:30:19.945863 1503721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:30:19.945867 1503721 out.go:374] Setting ErrFile to fd 2...
	I1218 01:30:19.945871 1503721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:30:19.946134 1503721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:30:19.946495 1503721 out.go:368] Setting JSON to false
	I1218 01:30:19.947480 1503721 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":29566,"bootTime":1765991854,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:30:19.947536 1503721 start.go:143] virtualization:  
	I1218 01:30:19.953428 1503721 out.go:179] * [cert-expiration-976781] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:30:19.958768 1503721 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:30:19.958877 1503721 notify.go:221] Checking for updates...
	I1218 01:30:19.964944 1503721 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:30:19.968496 1503721 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:30:19.971512 1503721 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:30:19.974508 1503721 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:30:19.977508 1503721 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:30:19.980980 1503721 config.go:182] Loaded profile config "cert-expiration-976781": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:30:19.981572 1503721 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:30:20.021977 1503721 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:30:20.022165 1503721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:30:20.090578 1503721 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-18 01:30:20.079231207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:30:20.090889 1503721 docker.go:319] overlay module found
	I1218 01:30:20.094316 1503721 out.go:179] * Using the docker driver based on existing profile
	I1218 01:30:20.097413 1503721 start.go:309] selected driver: docker
	I1218 01:30:20.097425 1503721 start.go:927] validating driver "docker" against &{Name:cert-expiration-976781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-976781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:30:20.097533 1503721 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:30:20.098341 1503721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:30:20.165111 1503721 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-18 01:30:20.154070465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:30:20.165457 1503721 cni.go:84] Creating CNI manager for ""
	I1218 01:30:20.165501 1503721 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:30:20.165539 1503721 start.go:353] cluster config:
	{Name:cert-expiration-976781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-976781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:30:20.168838 1503721 out.go:179] * Starting "cert-expiration-976781" primary control-plane node in "cert-expiration-976781" cluster
	I1218 01:30:20.171737 1503721 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:30:20.174612 1503721 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:30:20.177674 1503721 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1218 01:30:20.177718 1503721 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1218 01:30:20.177745 1503721 cache.go:65] Caching tarball of preloaded images
	I1218 01:30:20.177761 1503721 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:30:20.177835 1503721 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:30:20.177844 1503721 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1218 01:30:20.177962 1503721 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/config.json ...
	I1218 01:30:20.198421 1503721 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:30:20.198433 1503721 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:30:20.198453 1503721 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:30:20.198483 1503721 start.go:360] acquireMachinesLock for cert-expiration-976781: {Name:mk15d19832474fc3ac5df1c966514fdde66820ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:30:20.198545 1503721 start.go:364] duration metric: took 45.948µs to acquireMachinesLock for "cert-expiration-976781"
	I1218 01:30:20.198565 1503721 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:30:20.198570 1503721 fix.go:54] fixHost starting: 
	I1218 01:30:20.198852 1503721 cli_runner.go:164] Run: docker container inspect cert-expiration-976781 --format={{.State.Status}}
	I1218 01:30:20.215903 1503721 fix.go:112] recreateIfNeeded on cert-expiration-976781: state=Running err=<nil>
	W1218 01:30:20.215924 1503721 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:30:20.219301 1503721 out.go:252] * Updating the running docker "cert-expiration-976781" container ...
	I1218 01:30:20.219333 1503721 machine.go:94] provisionDockerMachine start ...
	I1218 01:30:20.219435 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:20.237809 1503721 main.go:143] libmachine: Using SSH client type: native
	I1218 01:30:20.238144 1503721 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1218 01:30:20.238150 1503721 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:30:20.392271 1503721 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-976781
	
	I1218 01:30:20.392285 1503721 ubuntu.go:182] provisioning hostname "cert-expiration-976781"
	I1218 01:30:20.392349 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:20.410219 1503721 main.go:143] libmachine: Using SSH client type: native
	I1218 01:30:20.410518 1503721 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1218 01:30:20.410527 1503721 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-976781 && echo "cert-expiration-976781" | sudo tee /etc/hostname
	I1218 01:30:20.579027 1503721 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-976781
	
	I1218 01:30:20.579106 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:20.596359 1503721 main.go:143] libmachine: Using SSH client type: native
	I1218 01:30:20.596713 1503721 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1218 01:30:20.596727 1503721 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-976781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-976781/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-976781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:30:20.761116 1503721 main.go:143] libmachine: SSH cmd err, output: <nil>: 
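The SSH command above is minikube's idempotent /etc/hosts edit: if a 127.0.1.1 entry already exists it is rewritten in place, otherwise a new entry is appended, and nothing happens if the hostname is already mapped. A minimal Go sketch of rendering that script for an arbitrary hostname (the hostsUpdateScript helper is hypothetical, for illustration only, not minikube's actual function):

    package main

    import "fmt"

    // hostsUpdateScript renders an idempotent /etc/hosts edit for hostname:
    // rewrite an existing 127.0.1.1 entry in place, or append a new one.
    // Hypothetical helper for illustration; not minikube's actual code.
    func hostsUpdateScript(hostname string) string {
    	return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsUpdateScript("cert-expiration-976781"))
    }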
	I1218 01:30:20.761131 1503721 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:30:20.761159 1503721 ubuntu.go:190] setting up certificates
	I1218 01:30:20.761169 1503721 provision.go:84] configureAuth start
	I1218 01:30:20.761228 1503721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-976781
	I1218 01:30:20.779609 1503721 provision.go:143] copyHostCerts
	I1218 01:30:20.779691 1503721 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:30:20.779699 1503721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:30:20.779774 1503721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:30:20.779873 1503721 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:30:20.779877 1503721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:30:20.779901 1503721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:30:20.779966 1503721 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:30:20.779969 1503721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:30:20.779998 1503721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:30:20.780051 1503721 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-976781 san=[127.0.0.1 192.168.76.2 cert-expiration-976781 localhost minikube]
	I1218 01:30:21.140296 1503721 provision.go:177] copyRemoteCerts
	I1218 01:30:21.140346 1503721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:30:21.140383 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:21.157938 1503721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/cert-expiration-976781/id_rsa Username:docker}
	I1218 01:30:21.264726 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:30:21.282854 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1218 01:30:21.299319 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 01:30:21.316882 1503721 provision.go:87] duration metric: took 555.692729ms to configureAuth
	I1218 01:30:21.316900 1503721 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:30:21.317082 1503721 config.go:182] Loaded profile config "cert-expiration-976781": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:30:21.317094 1503721 machine.go:97] duration metric: took 1.097749373s to provisionDockerMachine
	I1218 01:30:21.317100 1503721 start.go:293] postStartSetup for "cert-expiration-976781" (driver="docker")
	I1218 01:30:21.317110 1503721 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:30:21.317156 1503721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:30:21.317200 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:21.334251 1503721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/cert-expiration-976781/id_rsa Username:docker}
	I1218 01:30:21.446505 1503721 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:30:21.450218 1503721 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:30:21.450239 1503721 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:30:21.450249 1503721 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:30:21.450303 1503721 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:30:21.450382 1503721 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:30:21.450483 1503721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:30:21.458070 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:30:21.476919 1503721 start.go:296] duration metric: took 159.804866ms for postStartSetup
	I1218 01:30:21.476991 1503721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:30:21.477029 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:21.493976 1503721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/cert-expiration-976781/id_rsa Username:docker}
	I1218 01:30:21.600093 1503721 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:30:21.605555 1503721 fix.go:56] duration metric: took 1.406977433s for fixHost
	I1218 01:30:21.605572 1503721 start.go:83] releasing machines lock for "cert-expiration-976781", held for 1.407018876s
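The acquireMachinesLock/releasing pair bracketing fixHost is timed with simple duration metrics (45.948µs to acquire, 1.407018876s held). A minimal sketch of that measure-around-a-lock pattern, using only the standard library; the mutex here stands in for minikube's named per-host lock:

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    func main() {
    	var machinesLock sync.Mutex // stand-in for minikube's per-host lock

    	start := time.Now()
    	machinesLock.Lock()
    	fmt.Printf("duration metric: took %s to acquire machines lock\n", time.Since(start))

    	held := time.Now()
    	// ... fixHost-style work on the machine would happen here ...
    	machinesLock.Unlock()
    	fmt.Printf("releasing machines lock, held for %s\n", time.Since(held))
    }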
	I1218 01:30:21.605675 1503721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-976781
	I1218 01:30:21.623644 1503721 ssh_runner.go:195] Run: cat /version.json
	I1218 01:30:21.623750 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:21.624020 1503721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:30:21.624074 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:21.649383 1503721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/cert-expiration-976781/id_rsa Username:docker}
	I1218 01:30:21.654076 1503721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/cert-expiration-976781/id_rsa Username:docker}
	I1218 01:30:21.756436 1503721 ssh_runner.go:195] Run: systemctl --version
	I1218 01:30:21.848358 1503721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:30:21.852900 1503721 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:30:21.852970 1503721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:30:21.860927 1503721 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 01:30:21.860941 1503721 start.go:496] detecting cgroup driver to use...
	I1218 01:30:21.860970 1503721 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:30:21.861033 1503721 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:30:21.877439 1503721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:30:21.891462 1503721 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:30:21.891515 1503721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:30:21.908207 1503721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:30:21.921868 1503721 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:30:22.076230 1503721 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:30:22.255325 1503721 docker.go:234] disabling docker service ...
	I1218 01:30:22.255383 1503721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:30:22.273967 1503721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:30:22.287849 1503721 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:30:22.433408 1503721 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:30:22.581723 1503721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:30:22.594997 1503721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:30:22.610212 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:30:22.619301 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:30:22.628591 1503721 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:30:22.628739 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:30:22.637926 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:30:22.647258 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:30:22.657044 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:30:22.665780 1503721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:30:22.674005 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:30:22.683555 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:30:22.693187 1503721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:30:22.702513 1503721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:30:22.710248 1503721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:30:22.717845 1503721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:30:22.870848 1503721 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 01:30:23.196570 1503721 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:30:23.196679 1503721 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:30:23.201802 1503721 start.go:564] Will wait 60s for crictl version
	I1218 01:30:23.201856 1503721 ssh_runner.go:195] Run: which crictl
	I1218 01:30:23.205787 1503721 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:30:23.233108 1503721 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:30:23.233172 1503721 ssh_runner.go:195] Run: containerd --version
	I1218 01:30:23.253871 1503721 ssh_runner.go:195] Run: containerd --version
	I1218 01:30:23.285282 1503721 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
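Before declaring the runtime ready, minikube restarts containerd and then polls for the socket with a bounded wait ("Will wait 60s for socket path /run/containerd/containerd.sock"). A minimal sketch of such a polling loop, assuming the 60s budget from the log; the 500ms poll interval is an assumption, not taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for path until it exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket file is present
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket ready")
    }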
	I1218 01:30:23.288370 1503721 cli_runner.go:164] Run: docker network inspect cert-expiration-976781 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:30:23.306979 1503721 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1218 01:30:23.311543 1503721 kubeadm.go:884] updating cluster {Name:cert-expiration-976781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-976781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:30:23.311645 1503721 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1218 01:30:23.311727 1503721 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:30:23.344714 1503721 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:30:23.344728 1503721 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:30:23.344787 1503721 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:30:23.375824 1503721 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:30:23.375836 1503721 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:30:23.375842 1503721 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 containerd true true} ...
	I1218 01:30:23.375943 1503721 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-976781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-976781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:30:23.376013 1503721 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:30:23.404993 1503721 cni.go:84] Creating CNI manager for ""
	I1218 01:30:23.405007 1503721 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:30:23.405022 1503721 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 01:30:23.405046 1503721 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-976781 NodeName:cert-expiration-976781 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:30:23.405166 1503721 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "cert-expiration-976781"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 01:30:23.405232 1503721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1218 01:30:23.414081 1503721 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:30:23.414142 1503721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:30:23.423292 1503721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:30:23.437006 1503721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 01:30:23.450941 1503721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
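The kubeadm config written above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick stdlib-only Go sketch that splits such a stream on document separators and reports each kind; illustrative only, since real validation would use a YAML parser:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Abbreviated stand-in for the generated kubeadm.yaml stream.
    	config := `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration`

    	// Split on the YAML document separator and report each kind.
    	for i, doc := range strings.Split(config, "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
    			}
    		}
    	}
    }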
	I1218 01:30:23.463876 1503721 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:30:23.467519 1503721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:30:23.618723 1503721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:30:23.633839 1503721 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781 for IP: 192.168.76.2
	I1218 01:30:23.633850 1503721 certs.go:195] generating shared ca certs ...
	I1218 01:30:23.633881 1503721 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:30:23.634016 1503721 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:30:23.634062 1503721 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:30:23.634068 1503721 certs.go:257] generating profile certs ...
	W1218 01:30:23.634194 1503721 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1218 01:30:23.634216 1503721 certs.go:629] cert expired /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/client.crt: expiration: 2025-12-18 01:29:56 +0000 UTC, now: 2025-12-18 01:30:23.634210824 +0000 UTC m=+3.747966324
	I1218 01:30:23.634453 1503721 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/client.key
	I1218 01:30:23.634470 1503721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/client.crt with IP's: []
	I1218 01:30:23.872043 1503721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/client.crt ...
	I1218 01:30:23.872059 1503721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/client.crt: {Name:mk8702d83329dba2fcc30f55950fbe1b359d6d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:30:23.872228 1503721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/client.key ...
	I1218 01:30:23.872236 1503721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/client.key: {Name:mk627fe8df95286553681b5fbd7b7ec2a8d1f894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1218 01:30:23.872425 1503721 out.go:285] ! Certificate apiserver.crt.f8e22b29 has expired. Generating a new one...
	I1218 01:30:23.872493 1503721 certs.go:629] cert expired /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.crt.f8e22b29: expiration: 2025-12-18 01:29:56 +0000 UTC, now: 2025-12-18 01:30:23.872486369 +0000 UTC m=+3.986241894
	I1218 01:30:23.872592 1503721 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.key.f8e22b29
	I1218 01:30:23.872617 1503721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.crt.f8e22b29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1218 01:30:24.050518 1503721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.crt.f8e22b29 ...
	I1218 01:30:24.050535 1503721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.crt.f8e22b29: {Name:mka64332242dbf26836ece09820d5637f33a48cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:30:24.050699 1503721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.key.f8e22b29 ...
	I1218 01:30:24.050714 1503721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.key.f8e22b29: {Name:mk5859b1e085a35861b656c252821e7b35c0da39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:30:24.050792 1503721 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.crt.f8e22b29 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.crt
	I1218 01:30:24.050932 1503721 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.key.f8e22b29 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.key
	W1218 01:30:24.051116 1503721 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1218 01:30:24.051135 1503721 certs.go:629] cert expired /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.crt: expiration: 2025-12-18 01:29:56 +0000 UTC, now: 2025-12-18 01:30:24.051130628 +0000 UTC m=+4.164886120
	I1218 01:30:24.051202 1503721 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.key
	I1218 01:30:24.051216 1503721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.crt with IP's: []
	I1218 01:30:24.276128 1503721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.crt ...
	I1218 01:30:24.276149 1503721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.crt: {Name:mk2282e19201c834eac983c956e02c43270357a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:30:24.276311 1503721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.key ...
	I1218 01:30:24.276319 1503721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.key: {Name:mkc9820a6e6d791af6511462824253292ad7c59e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:30:24.276516 1503721 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:30:24.276555 1503721 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:30:24.276563 1503721 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:30:24.276589 1503721 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:30:24.276613 1503721 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:30:24.276661 1503721 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:30:24.276709 1503721 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:30:24.277258 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:30:24.300109 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:30:24.322197 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:30:24.340448 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:30:24.363445 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1218 01:30:24.384798 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:30:24.410857 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:30:24.429520 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/cert-expiration-976781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 01:30:24.450570 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:30:24.471649 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:30:24.497264 1503721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:30:24.516149 1503721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:30:24.528960 1503721 ssh_runner.go:195] Run: openssl version
	I1218 01:30:24.535061 1503721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:30:24.542612 1503721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:30:24.550395 1503721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:30:24.554187 1503721 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:30:24.554241 1503721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:30:24.595608 1503721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:30:24.603157 1503721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:30:24.610369 1503721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:30:24.618390 1503721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:30:24.622382 1503721 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:30:24.622481 1503721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:30:24.663775 1503721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:30:24.671336 1503721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:30:24.679158 1503721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:30:24.686948 1503721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:30:24.691211 1503721 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:30:24.691270 1503721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:30:24.734568 1503721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:30:24.742431 1503721 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:30:24.746280 1503721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:30:24.788607 1503721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:30:24.830359 1503721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:30:24.871489 1503721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:30:24.913873 1503721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:30:24.959302 1503721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
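Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 24 hours (86,400 seconds); the profile certs that had already expired were regenerated a few lines earlier. The equivalent check in Go, as a minimal sketch using crypto/x509 (the cert path in main is a placeholder, not a claim about which file minikube checks this way):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path
    // expires within d (the Go analogue of `openssl x509 -checkend`).
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Placeholder path; substitute any certificate to check.
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }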
	I1218 01:30:25.001231 1503721 kubeadm.go:401] StartCluster: {Name:cert-expiration-976781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-976781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:30:25.001315 1503721 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:30:25.001405 1503721 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:30:25.034812 1503721 cri.go:89] found id: "e1ee105cf5226d031a5b2429fc04cf797ca8ea8982529287c61a18b6b1ba7dde"
	I1218 01:30:25.034824 1503721 cri.go:89] found id: "ac73b3c6cc3b37e4eb005d7816abeeccb30cf3ec4eba32fb1e5e66adba36c8b0"
	I1218 01:30:25.034827 1503721 cri.go:89] found id: "dbae7d0e1d273c718551ac2a4832905f1df28895694dd762ad07d60a3d717bf5"
	I1218 01:30:25.034830 1503721 cri.go:89] found id: "0ba80402fefce735376cfebf6299904f5004b13c86165a65fd926080d48b0c2c"
	I1218 01:30:25.034833 1503721 cri.go:89] found id: "777ed9cfa1add62d1d6d852814b231814dc59423c8272d4ffd1b187af9a86124"
	I1218 01:30:25.034835 1503721 cri.go:89] found id: "3da815b3f7ba640f09148e82071f34b14afea4a97e73998f13dd2252d4d8e254"
	I1218 01:30:25.034838 1503721 cri.go:89] found id: "a220c9e9787a92f48edb3991895fe7103a3352de965a9e03cf538d3cd53bf960"
	I1218 01:30:25.034840 1503721 cri.go:89] found id: "b3e44d82644387e1d9ff37b49896d36a3d6dec5dde27f486f4d07b0ba97167b5"
	I1218 01:30:25.034842 1503721 cri.go:89] found id: ""
	I1218 01:30:25.034901 1503721 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1218 01:30:25.069160 1503721 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0ba80402fefce735376cfebf6299904f5004b13c86165a65fd926080d48b0c2c","pid":1744,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ba80402fefce735376cfebf6299904f5004b13c86165a65fd926080d48b0c2c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ba80402fefce735376cfebf6299904f5004b13c86165a65fd926080d48b0c2c/rootfs","created":"2025-12-18T01:27:23.063633951Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.3","io.kubernetes.cri.sandbox-id":"50a6484577e7f65c7f11bc7ed239770115527025d5729c51e5f18b9b50cee285","io.kubernetes.cri.sandbox-name":"kube-proxy-dcrpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9051bfa3-cd38-4f33-9412-d052c8c0cc6a"},"owner":"root"},{"ociVersion":"1.2.1","id":"16395fcf0f7bcc4f3e006f0b04e8663666ac71c0cb8cc791c9e9a1aeb54b7873","pid":2046,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/16395fcf0f7bcc4f3e006f0b04e8663666ac71c0cb8cc791c9e9a1aeb54b7873","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/16395fcf0f7bcc4f3e006f0b04e8663666ac71c0cb8cc791c9e9a1aeb54b7873/rootfs","created":"2025-12-18T01:27:35.45259407Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"16395fcf0f7bcc4f3e006f0b04e8663666ac71c0cb8cc791c9e9a1aeb54b7873","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f015f1cc-07d6-461d-85f1-8e3e1a28ab1c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f015f1cc-07d6-461d-85f1-8e3e1a28ab1c"},"owner":"root"},{"ociVersion":"1.2.1","id":"2dcab59b267faa01a1374ccc12f66853e5c1bfeb044d1cdd633bfcad7f98c08d","pid":1323,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2dcab59b267faa01a1374ccc12f66853e5c1bfeb044d1cdd633bfcad7f98c08d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2dcab59b267faa01a1374ccc12f66853e5c1bfeb044d1cdd633bfcad7f98c08d/rootfs","created":"2025-12-18T01:27:09.662964572Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"2dcab59b267faa01a1374ccc12f66853e5c1bfeb044d1cdd633bfcad7f98c08d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-cert-expiration-976781_7e9d4ec42ea479dc3fd5d4a415740871","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-cert-expiration-976781","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7e9d4ec42ea479dc3fd5d4a415740871"},"owner":"root"},{"ociVersion":"1.2.1","id":"319390b145d45889a5bd958215b3e48f7d7750f69a45513bc742a84ee113e059","pid":2082,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/319390b145d45889a5bd958215b3e48f7d7750f69a45513bc742a84ee113e059","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/319390b145d45889a5bd958215b3e48f7d7750f69a45513bc742a84ee113e059/rootfs","created":"2025-12-18T01:27:35.516270043Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"319390b145d45889a5bd958215b3e48f7d7750f69a45513bc742a84ee113e059","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-66bc5c9577-x2r68_9cff62c1-3e90-4583-8477-f5d541d06e60","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-x2r68","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9cff62c1-3e90-4583-8477-f5d541d06e60"},"owner":"root"},{"ociVersion":"1.2.1","id":"3da815b3f7ba640f09148e82071f34b14afea4a97e73998f13dd2252d4d8e254","pid":1429,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3da815b3f7ba640f09148e82071f34b14afea4a97e73998f13dd2252d4d8e254","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3da815b3f7ba640f09148e82071f34b14afea4a97e73998f13dd2252d4d8e254/rootfs","created":"2025-12-18T01:27:09.885593599Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.3","io.kubernetes.cri.sandbox-id":"2dcab59b267faa01a1374ccc12f66853e5c1bfeb044d1cdd633bfcad7f98c08d","io.kubernetes.cri.sandbox-name":"kube-controller-manager-cert-expiration-976781","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7e9d4ec42ea479dc3fd5d4a415740871"},"owner":"root"},{"ociVersion":"1.2.1","id":"50a6484577e7f65c7f11bc7ed239770115527025d5729c51e5f18b9b50cee285","pid":1694,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/50a6484577e7f65c7f11bc7ed239770115527025d5729c51e5f18b9b50cee285","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/50a6484577e7f65c7f11bc7ed239770115527025d5729c51e5f18b9b50cee285/rootfs","created":"2025-12-18T01:27:22.949623123Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"50a6484577e7f65c7f11bc7ed239770115527025d5729c51e5f18b9b50cee285","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-dcrpd_9051bfa3-cd38-4f33-9412-d052c8c0cc6a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-dcrpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9051bfa3-cd38-4f33-9412-d052c8c0cc6a"},"owner":"root"},{"ociVersion":"1.2.1","id":"66c257c546639e9b8fac2b1c396f15546266402a993c3e5814b93125cad91b8e","pid":1307,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66c257c546639e9b8fac2b1c396f15546266402a993c3e5814b93125cad91b8e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66c257c546639e9b8fac2b1c396f15546266402a993c3e5814b93125cad91b8e/rootfs","created":"2025-12-18T01:27:09.643704021Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"66c257c546639e9b8fac2b1c396f15546266402a993c3e5814b93125cad91b8e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-cert-expiration-976781_a5c6ece8419b790c5b2589883feb1289","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-cert-expiration-976781","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a5c6ece8419b790c5b2589883feb1289"},"owner":"root"},{"ociVersion":"1.2.1","id":"6c39eea028c21cfe04cdaf13d2ad6ba7c60c899d5b0e9de154d574cdc5f43dc7","pid":1288,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c39eea028c21cfe04cdaf13d2ad6ba7c60c899d5b0e9de154d574cdc5f43dc7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c39eea028c21cfe04cdaf13d2ad6ba7c60c899d5b0e9de154d574cdc5f43dc7/rootfs","created":"2025-12-18T01:27:09.623251211Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6c39eea028c21cfe04cdaf13d2ad6ba7c60c899d5b0e9de154d574cdc5f43dc7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-cert-expiration-976781_d36dc4a96e0fee82df714758f8978f44","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-cert-expiration-976781","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d36dc4a96e0fee82df714758f8978f44"},"owner":"root"},{"ociVersion":"1.2.1","id":"777ed9cfa1add62d1d6d852814b231814dc59423c8272d4ffd1b187af9a86124","pid":1436,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/777ed9cfa1add62d1d6d852814b231814dc59423c8272d4ffd1b187af9a86124","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/777ed9cfa1add62d1d6d852814b231814dc59423c8272d4ffd1b187af9a86124/rootfs","created":"2025-12-18T01:27:09.88386614Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.3","io.kubernetes.cri.sandbox-id":"66c257c546639e9b8fac2b1c396f15546266402a993c3e5814b93125cad91b8e","io.kubernetes.cri.sandbox-name":"kube-apiserver-cert-expiration-976781","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a5c6ece8419b790c5b2589883feb1289"},"owner":"root"},{"ociVersion":"1.2.1","id":"879ab1f92f1717827e8bcd2039dec82628c563f79b4c8326ccd058686aae3df4","pid":1253,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/879ab1f92f1717827e8bcd2039dec82628c563f79b4c8326ccd058686aae3df4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/879ab1f92f1717827e8bcd2039dec82628c563f79b4c8326ccd058686aae3df4/rootfs","created":"2025-12-18T01:27:09.593760271Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"879ab1f92f1717827e8bcd2039dec82628c563f79b4c8326ccd058686aae3df4","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-cert-expiration-976781_78a0e55ae187d6e2125effb21c9d9784","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-cert-expiration-976781","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"78a0e55ae187d6e2125effb21c9d9784"},"owner":"root"},{"ociVersion":"1.2.1","id":"a220c9e9787a92f48edb3991895fe7103a3352de965a9e03cf538d3cd53bf960","pid":1404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a220c9e9787a92f48edb3991895fe7103a3352de965a9e03cf538d3cd53bf960","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a220c9e9787a92f48edb3991895fe7103a3352de965a9e03cf538d3cd53bf960/rootfs","created":"2025-12-18T01:27:09.872864422Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"879ab1f92f1717827e8bcd2039dec82628c563f79b4c8326ccd058686aae3df4","io.kubernetes.cri.sandbox-name":"etcd-cert-expiration-976781","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"78a0e55ae187d6e2125effb21c9d9784"},"owner":"root"},{"ociVersion":"1.2.1","id":"ac73b3c6cc3b37e4eb005d7816abeeccb30cf3ec4eba32fb1e5e66adba36c8b0","pid":2107,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac73b3c6cc3b37e4eb005d7816abeeccb30cf3ec4eba32fb1e5e66adba36c8b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac73b3c6cc3b37e4eb005d7816abeeccb30cf3ec4eba32fb1e5e66adba36c8b0/rootfs","created":"2025-12-18T01:27:35.590992713Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"16395fcf0f7bcc4f3e006f0b04e8663666ac71c0cb8cc791c9e9a1aeb54b7873","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f015f1cc-07d6-461d-85f1-8e3e1a28ab1c"},"owner":"root"},{"ociVersion":"1.2.1","id":"b3e44d82644387e1d9ff37b49896d36a3d6dec5dde27f486f4d07b0ba97167b5","pid":1376,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3e44d82644387e1d9ff37b49896d36a3d6dec5dde27f486f4d07b0ba97167b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3e44d82644387e1d9ff37b49896d36a3d6dec5dde27f486f4d07b0ba97167b5/rootfs","created":"2025-12-18T01:27:09.797661033Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.3","io.kubernetes.cri.sandbox-id":"6c39eea028c21cfe04cdaf13d2ad6ba7c60c899d5b0e9de154d574cdc5f43dc7","io.kubernetes.cri.sandbox-name":"kube-scheduler-cert-expiration-976781","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d36dc4a96e0fee82df714758f8978f44"},"owner":"root"},{"ociVersion":"1.2.1","id":"bdd41f6c34b48d9252b86d94ad0c46b83d7e0d53c84fe864302aee0f969ad95c","pid":1724,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd41f6c34b48d9252b86d94ad0c46b83d7e0d53c84fe864302aee0f969ad95c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd41f6c34b48d9252b86d94ad0c46b83d7e0d53c84fe864302aee0f969ad95c/rootfs","created":"2025-12-18T01:27:23.015323048Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bdd41f6c34b48d9252b86d94ad0c46b83d7e0d53c84fe864302aee0f969ad95c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-wn4f6_f3585374-bdb8-45fb-abf7-8d272963011c","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-wn4f6","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f3585374-bdb8-45fb-abf7-8d272963011c"},"owner":"root"},{"ociVersion":"1.2.1","id":"dbae7d0e1d273c718551ac2a4832905f1df28895694dd762ad07d60a3d717bf5","pid":1928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbae7d0e1d273c718551ac2a4832905f1df28895694dd762ad07d60a3d717bf5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbae7d0e1d273c718551ac2a4832905f1df28895694dd762ad07d60a3d717bf5/rootfs","created":"2025-12-18T01:27:24.511462462Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88","io.kubernetes.cri.sandbox-id":"bdd41f6c34b48d9252b86d94ad0c46b83d7e0d53c84fe864302aee0f969ad95c","io.kubernetes.cri.sandbox-name":"kindnet-wn4f6","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"f3585374-bdb8-45fb-abf7-8d272963011c"},"owner":"root"},{"ociVersion":"1.2.1","id":"e1ee105cf5226d031a5b2429fc04cf797ca8ea8982529287c61a18b6b1ba7dde","pid":2139,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1ee105cf5226d031a5b2429fc04cf797ca8ea8982529287c61a18b6b1ba7dde","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1ee105cf5226d031a5b2429fc04cf797ca8ea8982529287c61a18b6b1ba7dde/rootfs","created":"2025-12-18T01:27:35.664832098Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri.sandbox-id":"319390b145d45889a5bd958215b3e48f7d7750f69a45513bc742a84ee113e059","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-x2r68","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9cff62c1-3e90-4583-8477-f5d541d06e60"},"owner":"root"}]
	I1218 01:30:25.069459 1503721 cri.go:126] list returned 16 containers
	I1218 01:30:25.069469 1503721 cri.go:129] container: {ID:0ba80402fefce735376cfebf6299904f5004b13c86165a65fd926080d48b0c2c Status:running}
	I1218 01:30:25.069486 1503721 cri.go:135] skipping {0ba80402fefce735376cfebf6299904f5004b13c86165a65fd926080d48b0c2c running}: state = "running", want "paused"
	I1218 01:30:25.069493 1503721 cri.go:129] container: {ID:16395fcf0f7bcc4f3e006f0b04e8663666ac71c0cb8cc791c9e9a1aeb54b7873 Status:running}
	I1218 01:30:25.069498 1503721 cri.go:131] skipping 16395fcf0f7bcc4f3e006f0b04e8663666ac71c0cb8cc791c9e9a1aeb54b7873 - not in ps
	I1218 01:30:25.069501 1503721 cri.go:129] container: {ID:2dcab59b267faa01a1374ccc12f66853e5c1bfeb044d1cdd633bfcad7f98c08d Status:running}
	I1218 01:30:25.069507 1503721 cri.go:131] skipping 2dcab59b267faa01a1374ccc12f66853e5c1bfeb044d1cdd633bfcad7f98c08d - not in ps
	I1218 01:30:25.069510 1503721 cri.go:129] container: {ID:319390b145d45889a5bd958215b3e48f7d7750f69a45513bc742a84ee113e059 Status:running}
	I1218 01:30:25.069515 1503721 cri.go:131] skipping 319390b145d45889a5bd958215b3e48f7d7750f69a45513bc742a84ee113e059 - not in ps
	I1218 01:30:25.069517 1503721 cri.go:129] container: {ID:3da815b3f7ba640f09148e82071f34b14afea4a97e73998f13dd2252d4d8e254 Status:running}
	I1218 01:30:25.069522 1503721 cri.go:135] skipping {3da815b3f7ba640f09148e82071f34b14afea4a97e73998f13dd2252d4d8e254 running}: state = "running", want "paused"
	I1218 01:30:25.069525 1503721 cri.go:129] container: {ID:50a6484577e7f65c7f11bc7ed239770115527025d5729c51e5f18b9b50cee285 Status:running}
	I1218 01:30:25.069530 1503721 cri.go:131] skipping 50a6484577e7f65c7f11bc7ed239770115527025d5729c51e5f18b9b50cee285 - not in ps
	I1218 01:30:25.069533 1503721 cri.go:129] container: {ID:66c257c546639e9b8fac2b1c396f15546266402a993c3e5814b93125cad91b8e Status:running}
	I1218 01:30:25.069538 1503721 cri.go:131] skipping 66c257c546639e9b8fac2b1c396f15546266402a993c3e5814b93125cad91b8e - not in ps
	I1218 01:30:25.069541 1503721 cri.go:129] container: {ID:6c39eea028c21cfe04cdaf13d2ad6ba7c60c899d5b0e9de154d574cdc5f43dc7 Status:running}
	I1218 01:30:25.069544 1503721 cri.go:131] skipping 6c39eea028c21cfe04cdaf13d2ad6ba7c60c899d5b0e9de154d574cdc5f43dc7 - not in ps
	I1218 01:30:25.069550 1503721 cri.go:129] container: {ID:777ed9cfa1add62d1d6d852814b231814dc59423c8272d4ffd1b187af9a86124 Status:running}
	I1218 01:30:25.069555 1503721 cri.go:135] skipping {777ed9cfa1add62d1d6d852814b231814dc59423c8272d4ffd1b187af9a86124 running}: state = "running", want "paused"
	I1218 01:30:25.069558 1503721 cri.go:129] container: {ID:879ab1f92f1717827e8bcd2039dec82628c563f79b4c8326ccd058686aae3df4 Status:running}
	I1218 01:30:25.069562 1503721 cri.go:131] skipping 879ab1f92f1717827e8bcd2039dec82628c563f79b4c8326ccd058686aae3df4 - not in ps
	I1218 01:30:25.069566 1503721 cri.go:129] container: {ID:a220c9e9787a92f48edb3991895fe7103a3352de965a9e03cf538d3cd53bf960 Status:running}
	I1218 01:30:25.069570 1503721 cri.go:135] skipping {a220c9e9787a92f48edb3991895fe7103a3352de965a9e03cf538d3cd53bf960 running}: state = "running", want "paused"
	I1218 01:30:25.069575 1503721 cri.go:129] container: {ID:ac73b3c6cc3b37e4eb005d7816abeeccb30cf3ec4eba32fb1e5e66adba36c8b0 Status:running}
	I1218 01:30:25.069580 1503721 cri.go:135] skipping {ac73b3c6cc3b37e4eb005d7816abeeccb30cf3ec4eba32fb1e5e66adba36c8b0 running}: state = "running", want "paused"
	I1218 01:30:25.069583 1503721 cri.go:129] container: {ID:b3e44d82644387e1d9ff37b49896d36a3d6dec5dde27f486f4d07b0ba97167b5 Status:running}
	I1218 01:30:25.069586 1503721 cri.go:135] skipping {b3e44d82644387e1d9ff37b49896d36a3d6dec5dde27f486f4d07b0ba97167b5 running}: state = "running", want "paused"
	I1218 01:30:25.069589 1503721 cri.go:129] container: {ID:bdd41f6c34b48d9252b86d94ad0c46b83d7e0d53c84fe864302aee0f969ad95c Status:running}
	I1218 01:30:25.069594 1503721 cri.go:131] skipping bdd41f6c34b48d9252b86d94ad0c46b83d7e0d53c84fe864302aee0f969ad95c - not in ps
	I1218 01:30:25.069597 1503721 cri.go:129] container: {ID:dbae7d0e1d273c718551ac2a4832905f1df28895694dd762ad07d60a3d717bf5 Status:running}
	I1218 01:30:25.069602 1503721 cri.go:135] skipping {dbae7d0e1d273c718551ac2a4832905f1df28895694dd762ad07d60a3d717bf5 running}: state = "running", want "paused"
	I1218 01:30:25.069606 1503721 cri.go:129] container: {ID:e1ee105cf5226d031a5b2429fc04cf797ca8ea8982529287c61a18b6b1ba7dde Status:running}
	I1218 01:30:25.069610 1503721 cri.go:135] skipping {e1ee105cf5226d031a5b2429fc04cf797ca8ea8982529287c61a18b6b1ba7dde running}: state = "running", want "paused"
	I1218 01:30:25.069707 1503721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:30:25.079063 1503721 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:30:25.079073 1503721 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:30:25.079141 1503721 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:30:25.088068 1503721 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:30:25.088921 1503721 kubeconfig.go:125] found "cert-expiration-976781" server: "https://192.168.76.2:8443"
	I1218 01:30:25.091490 1503721 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:30:25.100880 1503721 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1218 01:30:25.100923 1503721 kubeadm.go:602] duration metric: took 21.834022ms to restartPrimaryControlPlane
	I1218 01:30:25.100933 1503721 kubeadm.go:403] duration metric: took 99.715424ms to StartCluster
	I1218 01:30:25.100950 1503721 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:30:25.101042 1503721 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:30:25.102093 1503721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:30:25.102393 1503721 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:30:25.102601 1503721 config.go:182] Loaded profile config "cert-expiration-976781": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:30:25.102643 1503721 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 01:30:25.102852 1503721 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-976781"
	I1218 01:30:25.102874 1503721 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-976781"
	W1218 01:30:25.102881 1503721 addons.go:248] addon storage-provisioner should already be in state true
	I1218 01:30:25.102906 1503721 host.go:66] Checking if "cert-expiration-976781" exists ...
	I1218 01:30:25.103425 1503721 cli_runner.go:164] Run: docker container inspect cert-expiration-976781 --format={{.State.Status}}
	I1218 01:30:25.104029 1503721 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-976781"
	I1218 01:30:25.104052 1503721 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-976781"
	I1218 01:30:25.104435 1503721 cli_runner.go:164] Run: docker container inspect cert-expiration-976781 --format={{.State.Status}}
	I1218 01:30:25.107413 1503721 out.go:179] * Verifying Kubernetes components...
	I1218 01:30:25.118593 1503721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:30:25.138414 1503721 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-976781"
	W1218 01:30:25.138428 1503721 addons.go:248] addon default-storageclass should already be in state true
	I1218 01:30:25.138452 1503721 host.go:66] Checking if "cert-expiration-976781" exists ...
	I1218 01:30:25.138942 1503721 cli_runner.go:164] Run: docker container inspect cert-expiration-976781 --format={{.State.Status}}
	I1218 01:30:25.151588 1503721 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:30:25.372983 1458839 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001015673s
	I1218 01:30:25.373274 1458839 kubeadm.go:319] 
	I1218 01:30:25.373345 1458839 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:30:25.373379 1458839 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:30:25.373484 1458839 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:30:25.373489 1458839 kubeadm.go:319] 
	I1218 01:30:25.373594 1458839 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:30:25.373626 1458839 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:30:25.373667 1458839 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:30:25.373672 1458839 kubeadm.go:319] 
	I1218 01:30:25.377741 1458839 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:30:25.378176 1458839 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:30:25.378286 1458839 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:30:25.378548 1458839 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 01:30:25.378553 1458839 kubeadm.go:319] 
	I1218 01:30:25.378622 1458839 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 01:30:25.378677 1458839 kubeadm.go:403] duration metric: took 12m18.62298779s to StartCluster
	I1218 01:30:25.378710 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:30:25.378773 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:30:25.425001 1458839 cri.go:89] found id: ""
	I1218 01:30:25.425025 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.425035 1458839 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:30:25.425041 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:30:25.425099 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:30:25.487693 1458839 cri.go:89] found id: ""
	I1218 01:30:25.487715 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.487723 1458839 logs.go:284] No container was found matching "etcd"
	I1218 01:30:25.487730 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:30:25.487855 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:30:25.530922 1458839 cri.go:89] found id: ""
	I1218 01:30:25.530945 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.530953 1458839 logs.go:284] No container was found matching "coredns"
	I1218 01:30:25.530959 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:30:25.531024 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:30:25.580163 1458839 cri.go:89] found id: ""
	I1218 01:30:25.580196 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.580205 1458839 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:30:25.580218 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:30:25.580290 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:30:25.611616 1458839 cri.go:89] found id: ""
	I1218 01:30:25.611643 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.611652 1458839 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:30:25.611658 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:30:25.611717 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:30:25.651571 1458839 cri.go:89] found id: ""
	I1218 01:30:25.651598 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.651607 1458839 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:30:25.651614 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:30:25.651673 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:30:25.702487 1458839 cri.go:89] found id: ""
	I1218 01:30:25.702511 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.702520 1458839 logs.go:284] No container was found matching "kindnet"
	I1218 01:30:25.702526 1458839 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1218 01:30:25.702590 1458839 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1218 01:30:25.751157 1458839 cri.go:89] found id: ""
	I1218 01:30:25.751182 1458839 logs.go:282] 0 containers: []
	W1218 01:30:25.751191 1458839 logs.go:284] No container was found matching "storage-provisioner"
	I1218 01:30:25.751201 1458839 logs.go:123] Gathering logs for kubelet ...
	I1218 01:30:25.751213 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:30:25.823563 1458839 logs.go:123] Gathering logs for dmesg ...
	I1218 01:30:25.823665 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:30:25.842924 1458839 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:30:25.842956 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:30:25.952149 1458839 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:30:25.952222 1458839 logs.go:123] Gathering logs for containerd ...
	I1218 01:30:25.952261 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:30:26.014471 1458839 logs.go:123] Gathering logs for container status ...
	I1218 01:30:26.014555 1458839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1218 01:30:26.062166 1458839 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001015673s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 01:30:26.062211 1458839 out.go:285] * 
	W1218 01:30:26.062262 1458839 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001015673s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:30:26.062273 1458839 out.go:285] * 
	W1218 01:30:26.064418 1458839 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:30:26.069406 1458839 out.go:203] 
	W1218 01:30:26.071604 1458839 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001015673s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:30:26.071667 1458839 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:30:26.071689 1458839 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:30:26.074955 1458839 out.go:203] 
	I1218 01:30:25.154906 1503721 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:30:25.154918 1503721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:30:25.154995 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:25.191698 1503721 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:30:25.191712 1503721 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:30:25.191787 1503721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-976781
	I1218 01:30:25.196889 1503721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/cert-expiration-976781/id_rsa Username:docker}
	I1218 01:30:25.220567 1503721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/cert-expiration-976781/id_rsa Username:docker}
	I1218 01:30:25.368913 1503721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:30:25.401830 1503721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:30:25.439865 1503721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:30:26.553114 1503721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.184175265s)
	I1218 01:30:26.553148 1503721 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.151308239s)
	I1218 01:30:26.553181 1503721 api_server.go:52] waiting for apiserver process to appear ...
	I1218 01:30:26.553238 1503721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:30:26.553295 1503721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.113417798s)
	I1218 01:30:26.582049 1503721 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1218 01:30:26.584934 1503721 addons.go:530] duration metric: took 1.48228206s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1218 01:30:26.586476 1503721 api_server.go:72] duration metric: took 1.484052029s to wait for apiserver process to appear ...
	I1218 01:30:26.586490 1503721 api_server.go:88] waiting for apiserver healthz status ...
	I1218 01:30:26.586507 1503721 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1218 01:30:26.595385 1503721 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1218 01:30:26.596609 1503721 api_server.go:141] control plane version: v1.34.3
	I1218 01:30:26.596664 1503721 api_server.go:131] duration metric: took 10.169041ms to wait for apiserver health ...
	I1218 01:30:26.596672 1503721 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 01:30:26.601465 1503721 system_pods.go:59] 8 kube-system pods found
	I1218 01:30:26.601483 1503721 system_pods.go:61] "coredns-66bc5c9577-x2r68" [9cff62c1-3e90-4583-8477-f5d541d06e60] Running
	I1218 01:30:26.601488 1503721 system_pods.go:61] "etcd-cert-expiration-976781" [dc56d7b7-2dd2-41f5-be16-14dcaf92e6b1] Running
	I1218 01:30:26.601491 1503721 system_pods.go:61] "kindnet-wn4f6" [f3585374-bdb8-45fb-abf7-8d272963011c] Running
	I1218 01:30:26.601493 1503721 system_pods.go:61] "kube-apiserver-cert-expiration-976781" [8b1c9c39-9edc-4e09-987d-344a4c04015c] Running
	I1218 01:30:26.601496 1503721 system_pods.go:61] "kube-controller-manager-cert-expiration-976781" [23546291-bf69-49eb-8cb1-aaff884f2cda] Running
	I1218 01:30:26.601499 1503721 system_pods.go:61] "kube-proxy-dcrpd" [9051bfa3-cd38-4f33-9412-d052c8c0cc6a] Running
	I1218 01:30:26.601502 1503721 system_pods.go:61] "kube-scheduler-cert-expiration-976781" [54b8b495-8eed-46ab-a2f2-a1d4fc6bd23e] Running
	I1218 01:30:26.601505 1503721 system_pods.go:61] "storage-provisioner" [f015f1cc-07d6-461d-85f1-8e3e1a28ab1c] Running
	I1218 01:30:26.601510 1503721 system_pods.go:74] duration metric: took 4.833577ms to wait for pod list to return data ...
	I1218 01:30:26.601521 1503721 kubeadm.go:587] duration metric: took 1.499101833s to wait for: map[apiserver:true system_pods:true]
	I1218 01:30:26.601532 1503721 node_conditions.go:102] verifying NodePressure condition ...
	I1218 01:30:26.604649 1503721 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 01:30:26.604669 1503721 node_conditions.go:123] node cpu capacity is 2
	I1218 01:30:26.604681 1503721 node_conditions.go:105] duration metric: took 3.144985ms to run NodePressure ...
	I1218 01:30:26.604693 1503721 start.go:242] waiting for startup goroutines ...
	I1218 01:30:26.604699 1503721 start.go:247] waiting for cluster config update ...
	I1218 01:30:26.604709 1503721 start.go:256] writing updated cluster config ...
	I1218 01:30:26.604989 1503721 ssh_runner.go:195] Run: rm -f paused
	I1218 01:30:26.696262 1503721 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1218 01:30:26.699630 1503721 out.go:179] * Done! kubectl is now configured to use "cert-expiration-976781" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:22:17 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:17.036171914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:22:17 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:17.037020788Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" with image id \"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.0-rc.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5\", size \"22432301\" in 1.527937695s"
	Dec 18 01:22:17 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:17.037147456Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" returns image reference \"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\""
	Dec 18 01:22:17 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:17.038018861Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 18 01:22:18 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:18.496048641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:22:18 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:18.499448126Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=20453241"
	Dec 18 01:22:18 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:18.503034814Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:22:18 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:18.511596638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:22:18 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:18.513040288Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.474982174s"
	Dec 18 01:22:18 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:18.513092602Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 18 01:22:18 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:18.515772637Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
	Dec 18 01:22:20 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:20.525793350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:22:20 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:20.527608585Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21753021"
	Dec 18 01:22:20 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:20.530044228Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:22:20 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:20.533939151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:22:20 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:20.535034305Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 2.01921942s"
	Dec 18 01:22:20 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:22:20.535151693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
	Dec 18 01:27:10 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:27:10.617029131Z" level=info msg="container event discarded" container=a324ed69126df710e31cf6e578c3cfbccc8d3a8ac9f34b6f964e47d4561435cd type=CONTAINER_DELETED_EVENT
	Dec 18 01:27:10 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:27:10.631281759Z" level=info msg="container event discarded" container=41b9189aeac02141bf7617c2280a9e381c45516edcf894c5fb1a22b8c73323a4 type=CONTAINER_DELETED_EVENT
	Dec 18 01:27:10 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:27:10.645015384Z" level=info msg="container event discarded" container=f57ce09638f6239203d830084309308e2dd7bdab9e772a3f16332f63756b3306 type=CONTAINER_DELETED_EVENT
	Dec 18 01:27:10 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:27:10.645059420Z" level=info msg="container event discarded" container=97d51cae09ba8b689798fcde1f290a01485ad2ee20ab36e803a63c2fac5605fb type=CONTAINER_DELETED_EVENT
	Dec 18 01:27:10 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:27:10.660239780Z" level=info msg="container event discarded" container=2894a9531ca42ef5e062d2a9b68d739b2d960778ebf947c557dcf88da8b1fdaa type=CONTAINER_DELETED_EVENT
	Dec 18 01:27:10 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:27:10.660292243Z" level=info msg="container event discarded" container=d3ff3b6ef8b6b0398ebcc6466aa6b80d387bb210031136470c54cdf02d58b562 type=CONTAINER_DELETED_EVENT
	Dec 18 01:27:10 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:27:10.676504685Z" level=info msg="container event discarded" container=0b6b3e3234f2dd4d296dc81bdcdb769761220f77d0d5dfdacbc0332135b31322 type=CONTAINER_DELETED_EVENT
	Dec 18 01:27:10 kubernetes-upgrade-675544 containerd[555]: time="2025-12-18T01:27:10.676556139Z" level=info msg="container event discarded" container=1832ce9680ddb267415d86aa676be120f36dd5464e8462aed1d74cc90c509162 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:30:28 up  8:12,  0 user,  load average: 1.23, 1.63, 1.98
	Linux kubernetes-upgrade-675544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:30:24 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:30:25 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 18 01:30:25 kubernetes-upgrade-675544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:30:25 kubernetes-upgrade-675544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:30:25 kubernetes-upgrade-675544 kubelet[14302]: E1218 01:30:25.249396   14302 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:30:25 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:30:25 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:30:25 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 18 01:30:25 kubernetes-upgrade-675544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:30:25 kubernetes-upgrade-675544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:30:26 kubernetes-upgrade-675544 kubelet[14383]: E1218 01:30:26.103535   14383 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:30:26 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:30:26 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:30:26 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 18 01:30:26 kubernetes-upgrade-675544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:30:26 kubernetes-upgrade-675544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:30:26 kubernetes-upgrade-675544 kubelet[14403]: E1218 01:30:26.984966   14403 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:30:26 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:30:26 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:30:27 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 18 01:30:27 kubernetes-upgrade-675544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:30:27 kubernetes-upgrade-675544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:30:27 kubernetes-upgrade-675544 kubelet[14424]: E1218 01:30:27.788083   14424 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:30:27 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:30:27 kubernetes-upgrade-675544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
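
The dump above shows a single failure mode repeating: kubelet exits during configuration validation on this cgroup v1 host, so the healthz probe that kubeadm's [kubelet-check] phase polls can never succeed, and every control-plane container query comes back empty. A quick way to confirm this by hand on the node; the curl, systemctl, and journalctl commands are the ones already named in the output, while the stat check is an assumed, standard way to tell cgroup v1 from v2:

	# Which cgroup hierarchy is the host running?
	stat -fc %T /sys/fs/cgroup/               # "cgroup2fs" => v2; "tmpfs" => cgroup v1

	# The probe kubeadm's [kubelet-check] phase polls, per the error text above.
	curl -sSL http://127.0.0.1:10248/healthz

	# The troubleshooting commands kubeadm itself suggests.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 50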
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-675544 -n kubernetes-upgrade-675544
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-675544 -n kubernetes-upgrade-675544: exit status 2 (450.658064ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-675544" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-675544" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-675544
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-675544: (2.484940536s)
--- FAIL: TestKubernetesUpgrade (802.06s)
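
The [WARNING SystemVerification] line in the output above names the escape hatch for this failure: on a cgroup v1 host, kubelet v1.35 or newer refuses to start unless the configuration explicitly opts back in. A minimal sketch of that opt-in, assuming the kubeadm-written /var/lib/kubelet/config.yaml from the output is the active kubelet configuration and that the YAML spelling of the 'FailCgroupV1' option the warning names is lowerCamelCase (verify against the v1.35 KubeletConfiguration reference before relying on this):

	# Append the opt-in to the kubelet configuration kubeadm wrote.
	# "failCgroupV1" is the assumed YAML spelling of the 'FailCgroupV1' option
	# named in the kubeadm warning; false allows kubelet to run on cgroup v1.
	sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet

Note that minikube's printed suggestion, --extra-config=kubelet.cgroup-driver=systemd, targets a cgroup driver mismatch, while the kubelet journal above fails on the cgroup version check itself; per the same warning, the SystemVerification check must also be skipped explicitly (the kubeadm invocation above already lists it in --ignore-preflight-errors).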

x
+
TestStartStop/group/no-preload/serial/FirstStart (511.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1218 01:31:27.471052 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m29.62108056s)

-- stdout --
	* [no-preload-970975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-970975" primary control-plane node in "no-preload-970975" cluster
	* Pulling base image v0.0.48-1765966054-22186 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	I1218 01:31:15.825655 1510702 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:31:15.825796 1510702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:31:15.825808 1510702 out.go:374] Setting ErrFile to fd 2...
	I1218 01:31:15.825814 1510702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:31:15.826071 1510702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:31:15.826508 1510702 out.go:368] Setting JSON to false
	I1218 01:31:15.827474 1510702 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":29622,"bootTime":1765991854,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:31:15.827550 1510702 start.go:143] virtualization:  
	I1218 01:31:15.831286 1510702 out.go:179] * [no-preload-970975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:31:15.835752 1510702 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:31:15.835845 1510702 notify.go:221] Checking for updates...
	I1218 01:31:15.842588 1510702 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:31:15.845740 1510702 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:31:15.848831 1510702 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:31:15.851916 1510702 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:31:15.854949 1510702 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:31:15.858529 1510702 config.go:182] Loaded profile config "old-k8s-version-207212": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1218 01:31:15.858630 1510702 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:31:15.884871 1510702 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:31:15.885009 1510702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:31:15.958680 1510702 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:31:15.946371735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:31:15.958791 1510702 docker.go:319] overlay module found
	I1218 01:31:15.962198 1510702 out.go:179] * Using the docker driver based on user configuration
	I1218 01:31:15.965148 1510702 start.go:309] selected driver: docker
	I1218 01:31:15.965172 1510702 start.go:927] validating driver "docker" against <nil>
	I1218 01:31:15.965187 1510702 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:31:15.965956 1510702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:31:16.030911 1510702 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:31:16.021169608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:31:16.031062 1510702 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1218 01:31:16.031301 1510702 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 01:31:16.034279 1510702 out.go:179] * Using Docker driver with root privileges
	I1218 01:31:16.037139 1510702 cni.go:84] Creating CNI manager for ""
	I1218 01:31:16.037210 1510702 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:31:16.037226 1510702 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 01:31:16.037306 1510702 start.go:353] cluster config:
	{Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:31:16.042243 1510702 out.go:179] * Starting "no-preload-970975" primary control-plane node in "no-preload-970975" cluster
	I1218 01:31:16.045032 1510702 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:31:16.048004 1510702 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:31:16.050914 1510702 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:31:16.051003 1510702 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:31:16.051059 1510702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json ...
	I1218 01:31:16.051089 1510702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json: {Name:mk47295f5e178560ccc452fd2b5507a6b61d2149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
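The two lines above show the profile being persisted as JSON under .minikube/profiles/<name>/config.json behind a write lock. A minimal sketch of that persistence step, assuming a hypothetical trimmed-down config struct (the real one carries the many fields dumped in the cluster config line above):

    package main

    import (
    	"encoding/json"
    	"os"
    )

    // Hypothetical subset of the profile config written to config.json.
    type clusterConfig struct {
    	Name              string `json:"Name"`
    	Memory            int    `json:"Memory"`
    	CPUs              int    `json:"CPUs"`
    	Driver            string `json:"Driver"`
    	KubernetesVersion string `json:"KubernetesVersion"`
    	ContainerRuntime  string `json:"ContainerRuntime"`
    }

    func saveConfig(path string, c clusterConfig) error {
    	data, err := json.MarshalIndent(c, "", "  ")
    	if err != nil {
    		return err
    	}
    	// Write to a temp file first so a crash cannot leave a half-written config.
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	_ = saveConfig("config.json", clusterConfig{
    		Name: "no-preload-970975", Memory: 3072, CPUs: 2,
    		Driver: "docker", KubernetesVersion: "v1.35.0-rc.1",
    		ContainerRuntime: "containerd",
    	})
    }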
	I1218 01:31:16.051330 1510702 cache.go:107] acquiring lock: {Name:mkbe76c9f71177ead8df5bdae626dba72c24e88a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.051403 1510702 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1218 01:31:16.051417 1510702 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.58µs
	I1218 01:31:16.051430 1510702 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1218 01:31:16.051445 1510702 cache.go:107] acquiring lock: {Name:mk73deadf102b9ef2729ab344cb753d1e81c8e69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.051511 1510702 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:31:16.051839 1510702 cache.go:107] acquiring lock: {Name:mk08959f4f9aec2f8cb7736193533393f169491b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.051985 1510702 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:16.052138 1510702 cache.go:107] acquiring lock: {Name:mk51756ddbebcd3ad705096b7bac91c4370ab67f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.052254 1510702 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:16.052380 1510702 cache.go:107] acquiring lock: {Name:mkb0d564e902314f0008f6dd25799cc8c98892bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.053017 1510702 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:16.053583 1510702 cache.go:107] acquiring lock: {Name:mkf6c55bc605708b579c41afc97203c8d4e81ed8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.053712 1510702 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:16.053758 1510702 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:16.054122 1510702 cache.go:107] acquiring lock: {Name:mk1ebccb0216e63c057736909b9d1bea2501f43c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.054198 1510702 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1218 01:31:16.054207 1510702 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 91.723µs
	I1218 01:31:16.054215 1510702 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1218 01:31:16.054226 1510702 cache.go:107] acquiring lock: {Name:mk273a40d27e5765473ae1c9ccf1347edbca61c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.054293 1510702 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:16.053603 1510702 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:31:16.055050 1510702 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:16.055674 1510702 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:16.056583 1510702 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:16.056954 1510702 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:16.078130 1510702 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:31:16.078158 1510702 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:31:16.078175 1510702 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:31:16.078207 1510702 start.go:360] acquireMachinesLock for no-preload-970975: {Name:mkc5466bd6e57a370f52d05d09914f47211c2efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:31:16.078312 1510702 start.go:364] duration metric: took 85.045µs to acquireMachinesLock for "no-preload-970975"
	I1218 01:31:16.078354 1510702 start.go:93] Provisioning new machine with config: &{Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:31:16.078427 1510702 start.go:125] createHost starting for "" (driver="docker")
	I1218 01:31:16.083909 1510702 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1218 01:31:16.084183 1510702 start.go:159] libmachine.API.Create for "no-preload-970975" (driver="docker")
	I1218 01:31:16.084210 1510702 client.go:173] LocalClient.Create starting
	I1218 01:31:16.084292 1510702 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 01:31:16.084326 1510702 main.go:143] libmachine: Decoding PEM data...
	I1218 01:31:16.084341 1510702 main.go:143] libmachine: Parsing certificate...
	I1218 01:31:16.084396 1510702 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 01:31:16.084415 1510702 main.go:143] libmachine: Decoding PEM data...
	I1218 01:31:16.084427 1510702 main.go:143] libmachine: Parsing certificate...
	I1218 01:31:16.084967 1510702 cli_runner.go:164] Run: docker network inspect no-preload-970975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 01:31:16.103581 1510702 cli_runner.go:211] docker network inspect no-preload-970975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 01:31:16.103680 1510702 network_create.go:284] running [docker network inspect no-preload-970975] to gather additional debugging logs...
	I1218 01:31:16.103702 1510702 cli_runner.go:164] Run: docker network inspect no-preload-970975
	W1218 01:31:16.121384 1510702 cli_runner.go:211] docker network inspect no-preload-970975 returned with exit code 1
	I1218 01:31:16.121411 1510702 network_create.go:287] error running [docker network inspect no-preload-970975]: docker network inspect no-preload-970975: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-970975 not found
	I1218 01:31:16.121426 1510702 network_create.go:289] output of [docker network inspect no-preload-970975]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-970975 not found
	
	** /stderr **
	I1218 01:31:16.121531 1510702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:31:16.146479 1510702 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
	I1218 01:31:16.146880 1510702 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-687fba22ee0f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:88:15:4b:79:13} reservation:<nil>}
	I1218 01:31:16.147134 1510702 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb17cfebd2dd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:ff:26:b3:7d:81} reservation:<nil>}
	I1218 01:31:16.147581 1510702 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bcef50}
	I1218 01:31:16.147605 1510702 network_create.go:124] attempt to create docker network no-preload-970975 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1218 01:31:16.147666 1510702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-970975 no-preload-970975
	I1218 01:31:16.225227 1510702 network_create.go:108] docker network no-preload-970975 192.168.76.0/24 created
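The lines above show the subnet scan: 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are taken by existing bridges, so 192.168.76.0/24 is chosen. A sketch of that first-fit scan, assuming candidates step the third octet by 9 as the gaps in the log suggest (isTaken is a hypothetical stand-in for the real interface check):

    package main

    import "fmt"

    // isTaken stands in for the real check against existing bridge interfaces;
    // the entries below are the subnets this log reports as taken.
    func isTaken(subnet string) bool {
    	used := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	return used[subnet]
    }

    func main() {
    	// Scan candidate /24s starting at 192.168.49.0; the sequence in the log
    	// (49, 58, 67, 76) suggests a step of 9 in the third octet.
    	for octet := 49; octet < 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if isTaken(subnet) {
    			fmt.Println("skipping taken subnet", subnet)
    			continue
    		}
    		fmt.Println("using free private subnet", subnet)
    		break
    	}
    }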
	I1218 01:31:16.225268 1510702 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-970975" container
	I1218 01:31:16.225361 1510702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 01:31:16.243593 1510702 cli_runner.go:164] Run: docker volume create no-preload-970975 --label name.minikube.sigs.k8s.io=no-preload-970975 --label created_by.minikube.sigs.k8s.io=true
	I1218 01:31:16.264881 1510702 oci.go:103] Successfully created a docker volume no-preload-970975
	I1218 01:31:16.264967 1510702 cli_runner.go:164] Run: docker run --rm --name no-preload-970975-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-970975 --entrypoint /usr/bin/test -v no-preload-970975:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 01:31:16.398276 1510702 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1218 01:31:16.425745 1510702 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1218 01:31:16.442145 1510702 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1218 01:31:16.487195 1510702 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1218 01:31:16.510324 1510702 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1218 01:31:16.530184 1510702 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1218 01:31:16.860398 1510702 cache.go:157] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1218 01:31:16.860479 1510702 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 806.898097ms
	I1218 01:31:16.860500 1510702 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1218 01:31:16.967644 1510702 oci.go:107] Successfully prepared a docker volume no-preload-970975
	I1218 01:31:16.967695 1510702 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	W1218 01:31:16.967824 1510702 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 01:31:16.967934 1510702 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 01:31:17.049191 1510702 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-970975 --name no-preload-970975 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-970975 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-970975 --network no-preload-970975 --ip 192.168.76.2 --volume no-preload-970975:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 01:31:17.342435 1510702 cache.go:157] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1218 01:31:17.342461 1510702 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 1.288234189s
	I1218 01:31:17.342475 1510702 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1218 01:31:17.549965 1510702 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Running}}
	I1218 01:31:17.558614 1510702 cache.go:157] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1218 01:31:17.559138 1510702 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.507297471s
	I1218 01:31:17.559429 1510702 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1218 01:31:17.590663 1510702 cache.go:157] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1218 01:31:17.590910 1510702 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.538772046s
	I1218 01:31:17.590956 1510702 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1218 01:31:17.640109 1510702 cache.go:157] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1218 01:31:17.640186 1510702 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.587812206s
	I1218 01:31:17.640213 1510702 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1218 01:31:17.640350 1510702 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:31:17.712305 1510702 cache.go:157] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1218 01:31:17.713259 1510702 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.661806356s
	I1218 01:31:17.713438 1510702 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1218 01:31:17.713655 1510702 cache.go:87] Successfully saved all images to host disk.
	I1218 01:31:17.713612 1510702 cli_runner.go:164] Run: docker exec no-preload-970975 stat /var/lib/dpkg/alternatives/iptables
	I1218 01:31:17.783861 1510702 oci.go:144] the created container "no-preload-970975" has a running status.
	I1218 01:31:17.783893 1510702 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa...
	I1218 01:31:18.183874 1510702 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 01:31:18.212112 1510702 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:31:18.237551 1510702 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 01:31:18.237570 1510702 kic_runner.go:114] Args: [docker exec --privileged no-preload-970975 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 01:31:18.292156 1510702 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:31:18.314285 1510702 machine.go:94] provisionDockerMachine start ...
	I1218 01:31:18.314395 1510702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:31:18.334347 1510702 main.go:143] libmachine: Using SSH client type: native
	I1218 01:31:18.334965 1510702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1218 01:31:18.334987 1510702 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:31:18.335834 1510702 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
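The handshake EOF above is routine: sshd inside the freshly created container is not up yet, and the client retries until the forwarded port answers (success follows three seconds later). A stdlib-only sketch of such a wait loop; the address is the forwarded port from the log, the timeout and backoff are assumptions:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "127.0.0.1:34177" // forwarded SSH port from the log
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("sshd is accepting connections")
    			return
    		}
    		// EOF / connection refused while the container boots; back off and retry.
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for", addr)
    }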
	I1218 01:31:21.509127 1510702 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-970975
	
	I1218 01:31:21.509165 1510702 ubuntu.go:182] provisioning hostname "no-preload-970975"
	I1218 01:31:21.509277 1510702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:31:21.531805 1510702 main.go:143] libmachine: Using SSH client type: native
	I1218 01:31:21.532201 1510702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1218 01:31:21.532224 1510702 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-970975 && echo "no-preload-970975" | sudo tee /etc/hostname
	I1218 01:31:21.709277 1510702 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-970975
	
	I1218 01:31:21.709404 1510702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:31:21.730805 1510702 main.go:143] libmachine: Using SSH client type: native
	I1218 01:31:21.731114 1510702 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1218 01:31:21.731135 1510702 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-970975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-970975/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-970975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:31:21.901068 1510702 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:31:21.901094 1510702 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:31:21.901115 1510702 ubuntu.go:190] setting up certificates
	I1218 01:31:21.901124 1510702 provision.go:84] configureAuth start
	I1218 01:31:21.901187 1510702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:31:21.918018 1510702 provision.go:143] copyHostCerts
	I1218 01:31:21.918095 1510702 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:31:21.918110 1510702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:31:21.918188 1510702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:31:21.918286 1510702 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:31:21.918296 1510702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:31:21.918322 1510702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:31:21.918380 1510702 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:31:21.918389 1510702 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:31:21.918415 1510702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:31:21.918471 1510702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.no-preload-970975 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-970975]
	I1218 01:31:22.039122 1510702 provision.go:177] copyRemoteCerts
	I1218 01:31:22.039196 1510702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:31:22.039245 1510702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:31:22.058272 1510702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:31:22.169778 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:31:22.199678 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:31:22.222196 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 01:31:22.246071 1510702 provision.go:87] duration metric: took 344.919302ms to configureAuth
	I1218 01:31:22.246105 1510702 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:31:22.246292 1510702 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:31:22.246305 1510702 machine.go:97] duration metric: took 3.932002394s to provisionDockerMachine
	I1218 01:31:22.246312 1510702 client.go:176] duration metric: took 6.16209529s to LocalClient.Create
	I1218 01:31:22.246326 1510702 start.go:167] duration metric: took 6.162146071s to libmachine.API.Create "no-preload-970975"
	I1218 01:31:22.246342 1510702 start.go:293] postStartSetup for "no-preload-970975" (driver="docker")
	I1218 01:31:22.246354 1510702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:31:22.246407 1510702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:31:22.246453 1510702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:31:22.270038 1510702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:31:22.381429 1510702 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:31:22.385668 1510702 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:31:22.385736 1510702 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:31:22.385761 1510702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:31:22.385860 1510702 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:31:22.385977 1510702 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:31:22.386088 1510702 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:31:22.394115 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:31:22.413278 1510702 start.go:296] duration metric: took 166.920376ms for postStartSetup
	I1218 01:31:22.413660 1510702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:31:22.432255 1510702 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json ...
	I1218 01:31:22.432545 1510702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:31:22.432598 1510702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:31:22.450564 1510702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:31:22.553933 1510702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:31:22.558883 1510702 start.go:128] duration metric: took 6.480441392s to createHost
	I1218 01:31:22.558908 1510702 start.go:83] releasing machines lock for "no-preload-970975", held for 6.480580326s
	I1218 01:31:22.558981 1510702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:31:22.584893 1510702 ssh_runner.go:195] Run: cat /version.json
	I1218 01:31:22.584944 1510702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:31:22.585185 1510702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:31:22.585253 1510702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:31:22.609680 1510702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:31:22.618277 1510702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:31:22.720767 1510702 ssh_runner.go:195] Run: systemctl --version
	I1218 01:31:22.827792 1510702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:31:22.832810 1510702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:31:22.832909 1510702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:31:22.862248 1510702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1218 01:31:22.862284 1510702 start.go:496] detecting cgroup driver to use...
	I1218 01:31:22.862321 1510702 detect.go:187] detected "cgroupfs" cgroup driver on host os
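Driver detection here reports "cgroupfs" for the host. A sketch of one common heuristic for telling cgroup v1 from v2 (not necessarily the exact check detect.go performs):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// A unified cgroup v2 hierarchy exposes cgroup.controllers at the root;
    	// its absence indicates the legacy v1 hierarchy.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (unified) hierarchy")
    	} else {
    		// Hosts on the legacy v1 hierarchy, like the Ubuntu 20.04 runner in
    		// this log, are reported here with the "cgroupfs" driver.
    		fmt.Println("cgroup v1 hierarchy; cgroupfs driver")
    	}
    }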
	I1218 01:31:22.862378 1510702 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:31:22.877263 1510702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:31:22.893139 1510702 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:31:22.893214 1510702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:31:22.911945 1510702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:31:22.932134 1510702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:31:23.056787 1510702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:31:23.216145 1510702 docker.go:234] disabling docker service ...
	I1218 01:31:23.216215 1510702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:31:23.247785 1510702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:31:23.263169 1510702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:31:23.412173 1510702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:31:23.543057 1510702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:31:23.563513 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:31:23.579538 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:31:23.597803 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:31:23.613917 1510702 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:31:23.613992 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:31:23.631968 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:31:23.646444 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:31:23.656141 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:31:23.668829 1510702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:31:23.681840 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:31:23.691159 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:31:23.708249 1510702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:31:23.721235 1510702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:31:23.739020 1510702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:31:23.751218 1510702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:31:23.937182 1510702 ssh_runner.go:195] Run: sudo systemctl restart containerd
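The sed calls preceding the restart rewrite /etc/containerd/config.toml in place, for example forcing SystemdCgroup = false while preserving each line's indentation. The same substitution expressed in Go, as a sketch (the sample TOML fragment is illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Mirrors the sed call from the log: rewrite SystemdCgroup to false,
    	// keeping the leading indentation via capture group ${1}.
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
    	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }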
	I1218 01:31:24.056535 1510702 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:31:24.056729 1510702 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:31:24.062123 1510702 start.go:564] Will wait 60s for crictl version
	I1218 01:31:24.062249 1510702 ssh_runner.go:195] Run: which crictl
	I1218 01:31:24.067261 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:31:24.128500 1510702 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:31:24.128582 1510702 ssh_runner.go:195] Run: containerd --version
	I1218 01:31:24.161007 1510702 ssh_runner.go:195] Run: containerd --version
	I1218 01:31:24.197662 1510702 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:31:24.200585 1510702 cli_runner.go:164] Run: docker network inspect no-preload-970975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:31:24.219160 1510702 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1218 01:31:24.224763 1510702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:31:24.240767 1510702 kubeadm.go:884] updating cluster {Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:31:24.240928 1510702 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:31:24.240986 1510702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:31:24.291945 1510702 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1218 01:31:24.291974 1510702 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1218 01:31:24.292027 1510702 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:31:24.292230 1510702 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:31:24.292326 1510702 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:24.292419 1510702 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:24.292508 1510702 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:24.292593 1510702 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1218 01:31:24.292702 1510702 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:24.292789 1510702 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:24.295672 1510702 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:24.295940 1510702 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:24.296090 1510702 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:24.296217 1510702 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:24.296348 1510702 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:31:24.296681 1510702 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1218 01:31:24.296836 1510702 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:24.297020 1510702 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
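Each cached image is then checked against the runtime by listing it with ctr in the k8s.io namespace, as the Run lines below show. A sketch of that presence check (sudo and the ctr flags mirror the log; error handling is minimal):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageExists shells out the same way the log shows: list images in the
    // k8s.io namespace filtered by name and see whether anything comes back.
    func imageExists(name string) (bool, error) {
    	out, err := exec.Command("sudo", "ctr", "-n=k8s.io",
    		"images", "ls", "name=="+name).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.Contains(string(out), name), nil
    }

    func main() {
    	ok, err := imageExists("registry.k8s.io/kube-proxy:v1.35.0-rc.1")
    	fmt.Println(ok, err)
    }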
	I1218 01:31:24.529555 1510702 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-rc.1" and sha "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e"
	I1218 01:31:24.529632 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:24.549220 1510702 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" and sha "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a"
	I1218 01:31:24.549295 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:24.559314 1510702 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e" in container runtime
	I1218 01:31:24.559359 1510702 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:24.559410 1510702 ssh_runner.go:195] Run: which crictl
	I1218 01:31:24.563689 1510702 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.6-0" and sha "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57"
	I1218 01:31:24.563758 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:24.589878 1510702 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a" in container runtime
	I1218 01:31:24.589922 1510702 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:24.589981 1510702 ssh_runner.go:195] Run: which crictl
	I1218 01:31:24.590057 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:24.596780 1510702 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1218 01:31:24.596908 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:24.606687 1510702 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I1218 01:31:24.606773 1510702 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:24.606853 1510702 ssh_runner.go:195] Run: which crictl
	I1218 01:31:24.614523 1510702 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" and sha "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54"
	I1218 01:31:24.614645 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:31:24.615886 1510702 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" and sha "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde"
	I1218 01:31:24.615985 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:24.620987 1510702 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1218 01:31:24.621106 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1218 01:31:24.694031 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:24.694124 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:24.701904 1510702 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1218 01:31:24.701995 1510702 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:24.702077 1510702 ssh_runner.go:195] Run: which crictl
	I1218 01:31:24.702198 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:24.740975 1510702 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54" in container runtime
	I1218 01:31:24.741072 1510702 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:31:24.741160 1510702 ssh_runner.go:195] Run: which crictl
	I1218 01:31:24.753559 1510702 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1218 01:31:24.753668 1510702 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1218 01:31:24.753748 1510702 ssh_runner.go:195] Run: which crictl
	I1218 01:31:24.753892 1510702 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde" in container runtime
	I1218 01:31:24.753944 1510702 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:24.754001 1510702 ssh_runner.go:195] Run: which crictl
	I1218 01:31:24.849906 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:24.849987 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1218 01:31:24.850052 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:24.850102 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:24.850192 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:31:24.850280 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:24.850334 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1218 01:31:25.076443 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1218 01:31:25.076528 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1218 01:31:25.076584 1510702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1218 01:31:25.076669 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1218 01:31:25.076730 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1218 01:31:25.076787 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:25.076847 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:31:25.076907 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:25.328522 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1218 01:31:25.328614 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1218 01:31:25.328688 1510702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1218 01:31:25.328751 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1218 01:31:25.328800 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1218 01:31:25.328812 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (22434816 bytes)
	I1218 01:31:25.328861 1510702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I1218 01:31:25.328902 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1218 01:31:25.328956 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1218 01:31:25.329007 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1218 01:31:25.525351 1510702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1218 01:31:25.525502 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1218 01:31:25.525631 1510702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1218 01:31:25.525725 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1218 01:31:25.525803 1510702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1218 01:31:25.525876 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1218 01:31:25.525970 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1218 01:31:25.526012 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (20682752 bytes)
	I1218 01:31:25.526086 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1218 01:31:25.526120 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	I1218 01:31:25.526207 1510702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1218 01:31:25.526281 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1218 01:31:25.601339 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1218 01:31:25.601375 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1218 01:31:25.601437 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1218 01:31:25.601447 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (15416320 bytes)
	I1218 01:31:25.601482 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1218 01:31:25.601491 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (24702976 bytes)
	I1218 01:31:25.601523 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1218 01:31:25.601531 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
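
Each image that "needs transfer" follows the same three-step pattern visible above: a stat existence check on the node (whose status-1 failure is the expected signal), an scp of the cached tarball, and a ctr import logged further below. A condensed sketch of one round trip; the host paths come from the log, while "node" is a placeholder for the machine's SSH target:

    CACHE=$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
    DEST=/var/lib/minikube/images/pause_3.10.1
    # A failed stat means the tarball is absent on the node, so transfer it.
    ssh node "stat -c '%s %y' $DEST" || scp "$CACHE" "node:$DEST"
    # Load the transferred tarball into containerd's k8s.io namespace.
    ssh node "sudo ctr -n=k8s.io images import $DEST"
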
	W1218 01:31:25.627432 1510702 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1218 01:31:25.627603 1510702 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1218 01:31:25.627690 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:31:25.853131 1510702 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1218 01:31:25.853181 1510702 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:31:25.853235 1510702 ssh_runner.go:195] Run: which crictl
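
The W-line above flags an architecture mismatch: the cached storage-provisioner image is amd64 while this run needs arm64, so minikube discards it and transfers a corrected copy. One way to reproduce the check by hand, using the standard docker CLI and the image name from the warning:

    docker image inspect gcr.io/k8s-minikube/storage-provisioner:v5 \
      --format '{{.Architecture}}'   # prints "amd64" on an affected host
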
	I1218 01:31:25.900013 1510702 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1218 01:31:25.900124 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1218 01:31:26.027024 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:31:26.330136 1510702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1218 01:31:26.330419 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:31:26.473675 1510702 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:31:26.572445 1510702 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1218 01:31:26.572561 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1218 01:31:26.606023 1510702 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1218 01:31:26.606595 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1218 01:31:28.454805 1510702 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.848178069s)
	I1218 01:31:28.454846 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1218 01:31:28.454874 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1218 01:31:28.454974 1510702 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.882393175s)
	I1218 01:31:28.454991 1510702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1218 01:31:28.455008 1510702 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1218 01:31:28.455047 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1218 01:31:29.724729 1510702 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.269653492s)
	I1218 01:31:29.724814 1510702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1218 01:31:29.724856 1510702 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1218 01:31:29.724934 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1218 01:31:30.807445 1510702 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.082470322s)
	I1218 01:31:30.807474 1510702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1218 01:31:30.807497 1510702 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1218 01:31:30.807544 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0
	I1218 01:31:32.303304 1510702 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.6-0: (1.495732922s)
	I1218 01:31:32.303328 1510702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1218 01:31:32.303346 1510702 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1218 01:31:32.303396 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1218 01:31:33.346173 1510702 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.042753003s)
	I1218 01:31:33.346199 1510702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1218 01:31:33.346225 1510702 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1218 01:31:33.346275 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1218 01:31:34.500438 1510702 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.154130202s)
	I1218 01:31:34.500469 1510702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1218 01:31:34.500506 1510702 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1218 01:31:34.500595 1510702 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1218 01:31:34.896921 1510702 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1218 01:31:34.896960 1510702 cache_images.go:125] Successfully loaded all cached images
	I1218 01:31:34.896966 1510702 cache_images.go:94] duration metric: took 10.604979432s to LoadCachedImages
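
With "Successfully loaded all cached images" logged (10.6s for the whole batch), the runtime should now hold every control-plane image. A quick sanity check from inside the node, should one be needed:

    sudo crictl images | grep -E 'kube-|etcd|coredns|pause|storage-provisioner'
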
	I1218 01:31:34.896978 1510702 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:31:34.897091 1510702 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-970975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:31:34.897164 1510702 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:31:34.925494 1510702 cni.go:84] Creating CNI manager for ""
	I1218 01:31:34.925524 1510702 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:31:34.925541 1510702 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 01:31:34.925563 1510702 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-970975 NodeName:no-preload-970975 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:31:34.925709 1510702 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-970975"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
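
The generated file above packs four documents into one manifest: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Given the init failure later in this log, a dry-run against the path the log writes to is a reasonable way to validate such a config without touching the node (standard kubeadm flag):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
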
	
	I1218 01:31:34.925784 1510702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:31:34.934527 1510702 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1218 01:31:34.934602 1510702 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:31:34.943031 1510702 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubectl.sha256
	I1218 01:31:34.943133 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1218 01:31:34.943732 1510702 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm
	I1218 01:31:34.943988 1510702 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet
	I1218 01:31:34.947894 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1218 01:31:34.947934 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (55247032 bytes)
	I1218 01:31:35.890150 1510702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:31:35.914442 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1218 01:31:35.919408 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1218 01:31:35.919445 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (54329636 bytes)
	I1218 01:31:36.035508 1510702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1218 01:31:36.062656 1510702 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1218 01:31:36.062757 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/linux/arm64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (68354232 bytes)
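
The downloads above are pinned to their published SHA-256 digests via the "?checksum=file:" query handled by download.go. The equivalent manual fetch-and-verify, using the same dl.k8s.io release URLs that appear in the log:

    V=v1.35.0-rc.1; A=arm64
    curl -fLO "https://dl.k8s.io/release/${V}/bin/linux/${A}/kubeadm"
    curl -fLO "https://dl.k8s.io/release/${V}/bin/linux/${A}/kubeadm.sha256"
    # The .sha256 file holds the bare hash, so build a "hash  filename" line:
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum -c -
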
	I1218 01:31:36.594032 1510702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:31:36.603047 1510702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:31:36.617711 1510702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:31:36.632367 1510702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1218 01:31:36.647208 1510702 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:31:36.650772 1510702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:31:36.660744 1510702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:31:36.785944 1510702 ssh_runner.go:195] Run: sudo systemctl start kubelet
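
At this point minikube has written the kubelet unit (357 bytes), its kubeadm drop-in (326 bytes), patched /etc/hosts for control-plane.minikube.internal, reloaded systemd, and started the service. Since the kubelet's health is exactly what fails later in this log, the effective unit is worth knowing how to inspect (standard systemctl commands, run on the node):

    systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet  # the same liveness question the log asks above
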
	I1218 01:31:36.804310 1510702 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975 for IP: 192.168.76.2
	I1218 01:31:36.804387 1510702 certs.go:195] generating shared ca certs ...
	I1218 01:31:36.804427 1510702 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:31:36.804674 1510702 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:31:36.804762 1510702 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:31:36.804785 1510702 certs.go:257] generating profile certs ...
	I1218 01:31:36.804879 1510702 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.key
	I1218 01:31:36.804917 1510702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt with IP's: []
	I1218 01:31:36.876514 1510702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt ...
	I1218 01:31:36.876551 1510702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: {Name:mk70a4eda8545fd14ebd64d0078760a5d96b21dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:31:36.876865 1510702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.key ...
	I1218 01:31:36.876884 1510702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.key: {Name:mk88edb3d911d27b2555b40649172f411fe5269e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:31:36.877040 1510702 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key.4df284eb
	I1218 01:31:36.877063 1510702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt.4df284eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1218 01:31:37.184145 1510702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt.4df284eb ...
	I1218 01:31:37.184176 1510702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt.4df284eb: {Name:mkefd6a0b6c5af7fe428b1cbc8abaedec34c532d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:31:37.184365 1510702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key.4df284eb ...
	I1218 01:31:37.184380 1510702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key.4df284eb: {Name:mk2bc13c1fa372c3ad706d18cc5ccf5014df1a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:31:37.184465 1510702 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt.4df284eb -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt
	I1218 01:31:37.184548 1510702 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key.4df284eb -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key
	I1218 01:31:37.184609 1510702 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key
	I1218 01:31:37.184644 1510702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.crt with IP's: []
	I1218 01:31:37.322814 1510702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.crt ...
	I1218 01:31:37.322846 1510702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.crt: {Name:mk99182bd443170b13ce0ea125caba08b4ae4b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:31:37.323025 1510702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key ...
	I1218 01:31:37.323039 1510702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key: {Name:mk317518845535abf53475453db0ac9e10f3f756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
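
crypto.go is minting each profile certificate by signing it with the shared minikube CA. A rough openssl equivalent of the "minikube-user" client cert generated above; the subject fields are an assumption based on minikube's usual defaults, and the file paths are illustrative:

    openssl genrsa -out client.key 2048
    openssl req -new -key client.key \
      -subj "/O=system:masters/CN=minikube-user" -out client.csr   # assumed subject
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -out client.crt -days 365
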
	I1218 01:31:37.323268 1510702 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:31:37.323316 1510702 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:31:37.323330 1510702 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:31:37.323356 1510702 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:31:37.323386 1510702 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:31:37.323415 1510702 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:31:37.323466 1510702 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:31:37.324054 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:31:37.344872 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:31:37.363656 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:31:37.382039 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:31:37.400971 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:31:37.419408 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 01:31:37.437437 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:31:37.455799 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 01:31:37.474585 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:31:37.493314 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:31:37.511590 1510702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:31:37.529791 1510702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:31:37.542513 1510702 ssh_runner.go:195] Run: openssl version
	I1218 01:31:37.548943 1510702 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:31:37.556568 1510702 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:31:37.564309 1510702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:31:37.568105 1510702 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:31:37.568198 1510702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:31:37.611107 1510702 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:31:37.618767 1510702 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
	I1218 01:31:37.626887 1510702 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:31:37.634700 1510702 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:31:37.644201 1510702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:31:37.649193 1510702 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:31:37.649318 1510702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:31:37.692311 1510702 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:31:37.704694 1510702 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
	I1218 01:31:37.713836 1510702 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:31:37.722001 1510702 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:31:37.729762 1510702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:31:37.734009 1510702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:31:37.734106 1510702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:31:37.775152 1510702 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:31:37.782801 1510702 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
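
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's CA lookup convention: a certificate in /etc/ssl/certs is found through a <subject-hash>.0 symlink, which is where names like 51391683.0, 3ec20f2e.0, and b5213941.0 come from. The generic form of the step:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    H=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${H}.0"
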
	I1218 01:31:37.790408 1510702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:31:37.794163 1510702 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 01:31:37.794219 1510702 kubeadm.go:401] StartCluster: {Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:31:37.794295 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:31:37.794353 1510702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:31:37.820443 1510702 cri.go:89] found id: ""
	I1218 01:31:37.820560 1510702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:31:37.832288 1510702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:31:37.841645 1510702 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:31:37.841711 1510702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:31:37.849743 1510702 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:31:37.849781 1510702 kubeadm.go:158] found existing configuration files:
	
	I1218 01:31:37.849833 1510702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:31:37.857783 1510702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:31:37.857897 1510702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:31:37.865116 1510702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:31:37.873284 1510702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:31:37.873401 1510702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:31:37.880971 1510702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:31:37.889069 1510702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:31:37.889146 1510702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:31:37.896820 1510702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:31:37.906895 1510702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:31:37.906993 1510702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 01:31:37.915443 1510702 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:31:37.958190 1510702 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:31:37.958572 1510702 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:31:38.043130 1510702 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:31:38.043259 1510702 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:31:38.043312 1510702 kubeadm.go:319] OS: Linux
	I1218 01:31:38.043373 1510702 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:31:38.043434 1510702 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:31:38.043519 1510702 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:31:38.043577 1510702 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:31:38.043645 1510702 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:31:38.043713 1510702 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:31:38.043772 1510702 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:31:38.043828 1510702 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:31:38.043880 1510702 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:31:38.120830 1510702 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:31:38.121080 1510702 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:31:38.121224 1510702 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:31:38.128024 1510702 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:31:38.136221 1510702 out.go:252]   - Generating certificates and keys ...
	I1218 01:31:38.136392 1510702 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:31:38.136492 1510702 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:31:38.398459 1510702 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 01:31:38.668272 1510702 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 01:31:38.760172 1510702 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 01:31:38.962353 1510702 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 01:31:39.031278 1510702 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 01:31:39.031648 1510702 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-970975] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1218 01:31:39.185657 1510702 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 01:31:39.185962 1510702 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-970975] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1218 01:31:39.413068 1510702 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 01:31:39.572300 1510702 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 01:31:40.554266 1510702 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 01:31:40.554339 1510702 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:31:41.026587 1510702 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:31:41.303996 1510702 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:31:41.719971 1510702 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:31:41.973746 1510702 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:31:42.082063 1510702 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:31:42.090679 1510702 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:31:42.090768 1510702 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:31:42.094393 1510702 out.go:252]   - Booting up control plane ...
	I1218 01:31:42.094514 1510702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:31:42.094593 1510702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:31:42.094662 1510702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:31:42.150995 1510702 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:31:42.151112 1510702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:31:42.152732 1510702 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:31:42.153788 1510702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:31:42.154259 1510702 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:31:42.397630 1510702 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:31:42.397755 1510702 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:35:42.397538 1510702 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00125991s
	I1218 01:35:42.401178 1510702 kubeadm.go:319] 
	I1218 01:35:42.401252 1510702 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:35:42.401287 1510702 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:35:42.401392 1510702 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:35:42.401398 1510702 kubeadm.go:319] 
	I1218 01:35:42.401513 1510702 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:35:42.401545 1510702 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:35:42.401577 1510702 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:35:42.401581 1510702 kubeadm.go:319] 
	I1218 01:35:42.407273 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:35:42.407701 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:35:42.407810 1510702 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:35:42.408072 1510702 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 01:35:42.408077 1510702 kubeadm.go:319] 
	I1218 01:35:42.408145 1510702 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
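
kubeadm init failed at the kubelet health gate: it polled http://127.0.0.1:10248/healthz for the full 4m0s window and the connection was refused throughout, meaning the kubelet never came up. The commands the error text recommends, plus the probe itself, form a reasonable first triage on the node:

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sS http://127.0.0.1:10248/healthz; echo
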
	W1218 01:35:42.408251 1510702 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-970975] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-970975] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00125991s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
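	[Editor's note] The kubelet health-check failure above ends with two suggested diagnostics. Under the docker driver they have to run inside the node container; a minimal sketch, assuming the profile name no-preload-970975 taken from this log:

	    minikube ssh -p no-preload-970975 -- sudo systemctl status kubelet
	    minikube ssh -p no-preload-970975 -- sudo journalctl -xeu kubelet -n 100
	    # equivalently, since the node is a privileged docker container running systemd:
	    docker exec no-preload-970975 systemctl status kubelet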
	
	I1218 01:35:42.408332 1510702 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 01:35:42.911731 1510702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:35:42.933072 1510702 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:35:42.933139 1510702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:35:42.947287 1510702 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:35:42.947360 1510702 kubeadm.go:158] found existing configuration files:
	
	I1218 01:35:42.947446 1510702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:35:42.957986 1510702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:35:42.958044 1510702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:35:42.965856 1510702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:35:42.979107 1510702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:35:42.979169 1510702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:35:42.988823 1510702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:35:43.000275 1510702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:35:43.000350 1510702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:35:43.013988 1510702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:35:43.027475 1510702 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:35:43.027541 1510702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
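	[Editor's note] The sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the check fails. An illustrative shell equivalent (not minikube's actual Go implementation in kubeadm.go):

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done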
	I1218 01:35:43.039896 1510702 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:35:43.114667 1510702 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:35:43.116724 1510702 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:35:43.234979 1510702 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:35:43.235050 1510702 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:35:43.235085 1510702 kubeadm.go:319] OS: Linux
	I1218 01:35:43.235135 1510702 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:35:43.235183 1510702 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:35:43.235230 1510702 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:35:43.235277 1510702 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:35:43.235325 1510702 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:35:43.235373 1510702 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:35:43.235418 1510702 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:35:43.235465 1510702 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:35:43.235511 1510702 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:35:43.341240 1510702 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:35:43.341350 1510702 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:35:43.341441 1510702 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:35:43.366691 1510702 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:35:43.370674 1510702 out.go:252]   - Generating certificates and keys ...
	I1218 01:35:43.370777 1510702 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:35:43.370853 1510702 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:35:43.370937 1510702 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 01:35:43.371003 1510702 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 01:35:43.371598 1510702 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 01:35:43.372163 1510702 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 01:35:43.372743 1510702 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 01:35:43.373153 1510702 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 01:35:43.373589 1510702 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 01:35:43.374127 1510702 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 01:35:43.374445 1510702 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 01:35:43.374527 1510702 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:35:43.867994 1510702 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:35:43.936264 1510702 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:35:44.256684 1510702 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:35:44.403356 1510702 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:35:44.489623 1510702 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:35:44.490431 1510702 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:35:44.493158 1510702 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:35:44.496552 1510702 out.go:252]   - Booting up control plane ...
	I1218 01:35:44.496675 1510702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:35:44.496766 1510702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:35:44.497748 1510702 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:35:44.525024 1510702 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:35:44.525138 1510702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:35:44.538428 1510702 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:35:44.538533 1510702 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:35:44.538577 1510702 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:35:44.774815 1510702 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:35:44.778716 1510702 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:39:44.779437 1510702 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001112678s
	I1218 01:39:44.779500 1510702 kubeadm.go:319] 
	I1218 01:39:44.779569 1510702 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:39:44.779604 1510702 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:39:44.779726 1510702 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:39:44.779736 1510702 kubeadm.go:319] 
	I1218 01:39:44.779894 1510702 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:39:44.779933 1510702 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:39:44.779971 1510702 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:39:44.779981 1510702 kubeadm.go:319] 
	I1218 01:39:44.784423 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:39:44.784877 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:39:44.784990 1510702 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:39:44.785228 1510702 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:39:44.785237 1510702 kubeadm.go:319] 
	I1218 01:39:44.785307 1510702 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 01:39:44.785368 1510702 kubeadm.go:403] duration metric: took 8m6.991155077s to StartCluster
	I1218 01:39:44.785429 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:39:44.785502 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:39:44.810447 1510702 cri.go:89] found id: ""
	I1218 01:39:44.810472 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.810482 1510702 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:39:44.810488 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:39:44.810555 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:39:44.839406 1510702 cri.go:89] found id: ""
	I1218 01:39:44.839434 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.839443 1510702 logs.go:284] No container was found matching "etcd"
	I1218 01:39:44.839450 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:39:44.839511 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:39:44.868069 1510702 cri.go:89] found id: ""
	I1218 01:39:44.868096 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.868105 1510702 logs.go:284] No container was found matching "coredns"
	I1218 01:39:44.868111 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:39:44.868169 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:39:44.895127 1510702 cri.go:89] found id: ""
	I1218 01:39:44.895154 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.895163 1510702 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:39:44.895170 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:39:44.895229 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:39:44.922045 1510702 cri.go:89] found id: ""
	I1218 01:39:44.922067 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.922075 1510702 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:39:44.922081 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:39:44.922141 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:39:44.947348 1510702 cri.go:89] found id: ""
	I1218 01:39:44.947371 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.947380 1510702 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:39:44.947386 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:39:44.947445 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:39:44.974747 1510702 cri.go:89] found id: ""
	I1218 01:39:44.974817 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.974841 1510702 logs.go:284] No container was found matching "kindnet"
	I1218 01:39:44.974872 1510702 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:39:44.974904 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:39:45.158574 1510702 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:39:45.158593 1510702 logs.go:123] Gathering logs for containerd ...
	I1218 01:39:45.158606 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:39:45.231899 1510702 logs.go:123] Gathering logs for container status ...
	I1218 01:39:45.231984 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:39:45.274173 1510702 logs.go:123] Gathering logs for kubelet ...
	I1218 01:39:45.274204 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:39:45.347906 1510702 logs.go:123] Gathering logs for dmesg ...
	I1218 01:39:45.347946 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
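	[Editor's note] The post-failure log gathering above can be reproduced by hand; a sketch assuming the same profile and the docker driver:

	    minikube ssh -p no-preload-970975 -- sudo journalctl -u containerd -n 400
	    minikube ssh -p no-preload-970975 -- sudo crictl ps -a
	    minikube ssh -p no-preload-970975 -- sudo journalctl -u kubelet -n 400
	    minikube logs -p no-preload-970975 --file=logs.txt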
	W1218 01:39:45.367741 1510702 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 01:39:45.367789 1510702 out.go:285] * 
	W1218 01:39:45.367853 1510702 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.367874 1510702 out.go:285] * 
	W1218 01:39:45.370057 1510702 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:39:45.374979 1510702 out.go:203] 
	W1218 01:39:45.378669 1510702 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.378761 1510702 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:39:45.378790 1510702 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:39:45.381944 1510702 out.go:203] 

** /stderr **
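[Editor's note] The closing suggestion in the captured log translates to a start invocation along these lines; whether the systemd cgroup driver actually resolves this particular kubelet failure is not verified here, it is the workaround referenced in minikube issue 4172:

    minikube start -p no-preload-970975 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1 --extra-config=kubelet.cgroup-driver=systemd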
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
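[Editor's note] To reproduce outside CI, delete the profile and re-run the exact command from the failed assertion above:

    out/minikube-linux-arm64 delete -p no-preload-970975
    out/minikube-linux-arm64 start -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1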
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-970975
helpers_test.go:244: (dbg) docker inspect no-preload-970975:

-- stdout --
	[
	    {
	        "Id": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	        "Created": "2025-12-18T01:31:17.073767234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1511022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:31:17.16290886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hosts",
	        "LogPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d-json.log",
	        "Name": "/no-preload-970975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	                "LowerDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970975",
	                "Source": "/var/lib/docker/volumes/no-preload-970975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970975",
	                "name.minikube.sigs.k8s.io": "no-preload-970975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1e9bc76dbd04c46d3398cadb3276424663a2b675616e94f670f35547ef4442d",
	            "SandboxKey": "/var/run/docker/netns/e1e9bc76dbd0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34177"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34179"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:4c:f1:db:47:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ab8f39244bcf5d4900f2aa17c2792aefcc54582c4c250699be8d71c4c2a27a9",
	                    "EndpointID": "a42f74c81af72816a5096acec3153b345a82e549e666df17a9cd4661c0bfa55d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970975",
	                        "b1403d931305"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
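
The port map in the inspect output above is what every later step uses to reach the node from the host. A minimal Go sketch of the same lookup, assuming only the docker CLI on PATH; the inspect template is the one this harness itself runs further down in these logs, and hostPort is a hypothetical helper name:

    // portprobe.go - print the host port Docker published for a container port,
    // using the same inspect template the minikube test harness runs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort shells out to `docker container inspect -f` (assumes docker CLI on PATH).
    func hostPort(container, containerPort string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("no-preload-970975", "8443")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("apiserver published on 127.0.0.1:" + p) // 34180 in the dump above
    }
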
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975: exit status 6 (361.764557ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1218 01:39:45.899832 1539592 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
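
The exit status 6 above traces to the kubeconfig check logged at status.go:458: the profile name does not appear in the kubeconfig the harness points at. A sketch of that condition, assuming client-go's clientcmd loader; this reproduces the error text, not necessarily minikube's exact code path:

    // kubeconfig_check.go - does the profile appear as a cluster in the kubeconfig?
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path and profile name are taken from the error line above.
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22186-1259289/kubeconfig")
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Clusters["no-preload-970975"]; !ok {
            fmt.Println(`"no-preload-970975" does not appear in kubeconfig`)
        }
    }

`minikube update-context -p no-preload-970975` (suggested in the stdout above) rewrites that entry once the cluster is reachable again.
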
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970975 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-207212 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ delete  │ -p old-k8s-version-207212                                                                                                                                                                                                                                │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ delete  │ -p old-k8s-version-207212                                                                                                                                                                                                                                │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-922343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:33 UTC │
	│ stop    │ -p embed-certs-922343 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-922343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:35 UTC │
	│ image   │ embed-certs-922343 image list --format=json                                                                                                                                                                                                              │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ pause   │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ unpause │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:37:41
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:37:41.409265 1535974 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:37:41.409621 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409656 1535974 out.go:374] Setting ErrFile to fd 2...
	I1218 01:37:41.409674 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409955 1535974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:37:41.410413 1535974 out.go:368] Setting JSON to false
	I1218 01:37:41.411299 1535974 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30008,"bootTime":1765991854,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:37:41.411395 1535974 start.go:143] virtualization:  
	I1218 01:37:41.415580 1535974 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:37:41.419867 1535974 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:37:41.419945 1535974 notify.go:221] Checking for updates...
	I1218 01:37:41.426287 1535974 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:37:41.429432 1535974 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:37:41.433605 1535974 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:37:41.436760 1535974 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:37:41.439743 1535974 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:37:41.443485 1535974 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:41.443626 1535974 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:37:41.476508 1535974 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:37:41.476682 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.529692 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.519945478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.529801 1535974 docker.go:319] overlay module found
	I1218 01:37:41.533160 1535974 out.go:179] * Using the docker driver based on user configuration
	I1218 01:37:41.536049 1535974 start.go:309] selected driver: docker
	I1218 01:37:41.536071 1535974 start.go:927] validating driver "docker" against <nil>
	I1218 01:37:41.536087 1535974 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:37:41.536903 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.594960 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.586076136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.595118 1535974 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1218 01:37:41.595153 1535974 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1218 01:37:41.595385 1535974 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:37:41.598327 1535974 out.go:179] * Using Docker driver with root privileges
	I1218 01:37:41.601257 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:41.601333 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:41.601345 1535974 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 01:37:41.601426 1535974 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:41.606414 1535974 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:37:41.609305 1535974 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:37:41.612198 1535974 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:37:41.615045 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:41.615091 1535974 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:37:41.615104 1535974 cache.go:65] Caching tarball of preloaded images
	I1218 01:37:41.615136 1535974 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:37:41.615184 1535974 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:37:41.615194 1535974 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:37:41.615294 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:41.615311 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json: {Name:mk1c21bf1c938626eee4c23c85b81bbb6255d680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
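
The config.json saved here is plain JSON, so it can be read back without importing minikube's types. A sketch assuming the path from the two lines above; the KubernetesConfig/KubernetesVersion keys match the field names in the cluster config dump earlier in this log:

    // profile_config.go - decode a saved minikube profile config generically.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json")
        if err != nil {
            fmt.Println("read config:", err)
            return
        }
        var cfg map[string]any
        if err := json.Unmarshal(raw, &cfg); err != nil {
            fmt.Println("decode config:", err)
            return
        }
        if kc, ok := cfg["KubernetesConfig"].(map[string]any); ok {
            fmt.Println("KubernetesVersion:", kc["KubernetesVersion"]) // v1.35.0-rc.1 per this log
        }
    }
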
	I1218 01:37:41.634234 1535974 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:37:41.634258 1535974 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:37:41.634273 1535974 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:37:41.634304 1535974 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:37:41.634418 1535974 start.go:364] duration metric: took 93.52µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:37:41.634450 1535974 start.go:93] Provisioning new machine with config: &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:37:41.634560 1535974 start.go:125] createHost starting for "" (driver="docker")
	I1218 01:37:41.638056 1535974 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1218 01:37:41.638295 1535974 start.go:159] libmachine.API.Create for "newest-cni-120615" (driver="docker")
	I1218 01:37:41.638333 1535974 client.go:173] LocalClient.Create starting
	I1218 01:37:41.638412 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 01:37:41.638450 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638466 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638528 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 01:37:41.638549 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638564 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638936 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 01:37:41.659766 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 01:37:41.659848 1535974 network_create.go:284] running [docker network inspect newest-cni-120615] to gather additional debugging logs...
	I1218 01:37:41.659883 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615
	W1218 01:37:41.680710 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 returned with exit code 1
	I1218 01:37:41.680751 1535974 network_create.go:287] error running [docker network inspect newest-cni-120615]: docker network inspect newest-cni-120615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-120615 not found
	I1218 01:37:41.680768 1535974 network_create.go:289] output of [docker network inspect newest-cni-120615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-120615 not found
	
	** /stderr **
	I1218 01:37:41.680867 1535974 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:41.697958 1535974 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
	I1218 01:37:41.698338 1535974 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-687fba22ee0f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:88:15:4b:79:13} reservation:<nil>}
	I1218 01:37:41.698559 1535974 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb17cfebd2dd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:ff:26:b3:7d:81} reservation:<nil>}
	I1218 01:37:41.698831 1535974 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ab8f39244bc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:a9:2b:d7:5d:ce} reservation:<nil>}
	I1218 01:37:41.699243 1535974 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001983860}
	I1218 01:37:41.699261 1535974 network_create.go:124] attempt to create docker network newest-cni-120615 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1218 01:37:41.699323 1535974 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-120615 newest-cni-120615
	I1218 01:37:41.764110 1535974 network_create.go:108] docker network newest-cni-120615 192.168.85.0/24 created
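
The four "skipping subnet ... taken" probes above end with 192.168.85.0/24 being picked. One way to survey taken subnets from the host, as a sketch that shells out to the docker CLI with the same IPAM template this log itself uses (minikube also inspects host bridge interfaces, which this omits):

    // subnets.go - list each Docker network's IPAM subnet.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            fmt.Println("network ls:", err)
            return
        }
        for _, name := range strings.Fields(string(names)) {
            subnet, err := exec.Command("docker", "network", "inspect", name,
                "--format", `{{range .IPAM.Config}}{{.Subnet}}{{end}}`).Output()
            if err != nil {
                continue // the network may have vanished between ls and inspect
            }
            fmt.Printf("%s\t%s\n", name, strings.TrimSpace(string(subnet)))
        }
    }
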
	I1218 01:37:41.764138 1535974 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-120615" container
	I1218 01:37:41.764211 1535974 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 01:37:41.780305 1535974 cli_runner.go:164] Run: docker volume create newest-cni-120615 --label name.minikube.sigs.k8s.io=newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true
	I1218 01:37:41.798478 1535974 oci.go:103] Successfully created a docker volume newest-cni-120615
	I1218 01:37:41.798584 1535974 cli_runner.go:164] Run: docker run --rm --name newest-cni-120615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --entrypoint /usr/bin/test -v newest-cni-120615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 01:37:42.380541 1535974 oci.go:107] Successfully prepared a docker volume newest-cni-120615
	I1218 01:37:42.380617 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:42.380663 1535974 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 01:37:42.380737 1535974 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 01:37:46.199794 1535974 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.819017615s)
	I1218 01:37:46.199835 1535974 kic.go:203] duration metric: took 3.819169809s to extract preloaded images to volume ...
	W1218 01:37:46.199963 1535974 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 01:37:46.200068 1535974 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 01:37:46.253384 1535974 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-120615 --name newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-120615 --network newest-cni-120615 --ip 192.168.85.2 --volume newest-cni-120615:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 01:37:46.551881 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Running}}
	I1218 01:37:46.583903 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.608169 1535974 cli_runner.go:164] Run: docker exec newest-cni-120615 stat /var/lib/dpkg/alternatives/iptables
	I1218 01:37:46.667666 1535974 oci.go:144] the created container "newest-cni-120615" has a running status.
	I1218 01:37:46.667692 1535974 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa...
	I1218 01:37:46.834539 1535974 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 01:37:46.861844 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.884882 1535974 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 01:37:46.884908 1535974 kic_runner.go:114] Args: [docker exec --privileged newest-cni-120615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 01:37:46.942854 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.960511 1535974 machine.go:94] provisionDockerMachine start ...
	I1218 01:37:46.960612 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:46.978530 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:46.978859 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:46.978868 1535974 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:37:46.979490 1535974 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1218 01:37:50.148337 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:37:50.148363 1535974 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:37:50.148435 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.165796 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.166115 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.166132 1535974 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:37:50.330955 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:37:50.331106 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.348111 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.348435 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.348452 1535974 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:37:50.500688 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:37:50.500716 1535974 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:37:50.500744 1535974 ubuntu.go:190] setting up certificates
	I1218 01:37:50.500754 1535974 provision.go:84] configureAuth start
	I1218 01:37:50.500821 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:50.517589 1535974 provision.go:143] copyHostCerts
	I1218 01:37:50.517666 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:37:50.517680 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:37:50.517755 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:37:50.517871 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:37:50.517882 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:37:50.517912 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:37:50.517969 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:37:50.517977 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:37:50.518002 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:37:50.518054 1535974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
	I1218 01:37:50.674888 1535974 provision.go:177] copyRemoteCerts
	I1218 01:37:50.674959 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:37:50.675009 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.693570 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.800638 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:37:50.818412 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:37:50.836171 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:37:50.853859 1535974 provision.go:87] duration metric: took 353.089827ms to configureAuth
	I1218 01:37:50.853884 1535974 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:37:50.854091 1535974 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:50.854099 1535974 machine.go:97] duration metric: took 3.893564907s to provisionDockerMachine
	I1218 01:37:50.854106 1535974 client.go:176] duration metric: took 9.215762234s to LocalClient.Create
	I1218 01:37:50.854131 1535974 start.go:167] duration metric: took 9.215836644s to libmachine.API.Create "newest-cni-120615"
	I1218 01:37:50.854140 1535974 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:37:50.854151 1535974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:37:50.854199 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:37:50.854246 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.871379 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.976751 1535974 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:37:50.979800 1535974 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:37:50.979835 1535974 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:37:50.979846 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:37:50.979919 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:37:50.980017 1535974 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:37:50.980118 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:37:50.987435 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:51.010927 1535974 start.go:296] duration metric: took 156.770961ms for postStartSetup
	I1218 01:37:51.011358 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.028989 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:51.029275 1535974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:37:51.029337 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.046033 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.149901 1535974 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:37:51.154841 1535974 start.go:128] duration metric: took 9.520265624s to createHost
	I1218 01:37:51.154870 1535974 start.go:83] releasing machines lock for "newest-cni-120615", held for 9.520437574s
	I1218 01:37:51.154941 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.172452 1535974 ssh_runner.go:195] Run: cat /version.json
	I1218 01:37:51.172506 1535974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:37:51.172521 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.172564 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.192456 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.195325 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.384735 1535974 ssh_runner.go:195] Run: systemctl --version
	I1218 01:37:51.391571 1535974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:37:51.396317 1535974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:37:51.396387 1535974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:37:51.426976 1535974 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1218 01:37:51.427002 1535974 start.go:496] detecting cgroup driver to use...
	I1218 01:37:51.427045 1535974 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:37:51.427094 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:37:51.443517 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:37:51.461122 1535974 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:37:51.461182 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:37:51.478844 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:37:51.497057 1535974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:37:51.618030 1535974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:37:51.746908 1535974 docker.go:234] disabling docker service ...
	I1218 01:37:51.747041 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:37:51.768317 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:37:51.781980 1535974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:37:51.904322 1535974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:37:52.052799 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:37:52.066888 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:37:52.082976 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:37:52.093587 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:37:52.102930 1535974 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:37:52.103042 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:37:52.112246 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.121385 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:37:52.130577 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.139689 1535974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:37:52.149904 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:37:52.159110 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:37:52.168101 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:37:52.177205 1535974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:37:52.185241 1535974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:37:52.193080 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.308369 1535974 ssh_runner.go:195] Run: sudo systemctl restart containerd
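The sed runs above rewrite /etc/containerd/config.toml in place before the restart. As a rough sketch, the settings they leave behind look like the following — reconstructed from the logged sed expressions only, not from the actual file; the real config.toml carries many more keys, and containerd 2.x config versions may place these under different plugin section names:

	# Illustrative reconstruction of the edits logged above (not the full file).
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.10.1"     # pinned pause image
	  restrict_oom_score_adj = false
	  enable_unprivileged_ports = true
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false                            # "cgroupfs" driver, matching the host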
	I1218 01:37:52.450163 1535974 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:37:52.450242 1535974 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:37:52.454206 1535974 start.go:564] Will wait 60s for crictl version
	I1218 01:37:52.454330 1535974 ssh_runner.go:195] Run: which crictl
	I1218 01:37:52.457885 1535974 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:37:52.482102 1535974 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:37:52.482223 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.502684 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.526110 1535974 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:37:52.529020 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:52.546624 1535974 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:37:52.550634 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:37:52.563708 1535974 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:37:52.566648 1535974 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:37:52.566803 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:52.566895 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.591897 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.591927 1535974 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:37:52.592017 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.621212 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.621242 1535974 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:37:52.621251 1535974 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:37:52.621346 1535974 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
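The unit snippet above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines down). A quick manual way to confirm systemd picked up such a drop-in, assuming shell access to the node — these commands are an editor's aid, not something this run executes:

	systemctl cat kubelet                  # prints kubelet.service plus all drop-ins
	systemctl show kubelet -p ExecStart    # the effective (last) ExecStart line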
	I1218 01:37:52.621421 1535974 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:37:52.651981 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:52.652006 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:52.652029 1535974 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:37:52.652053 1535974 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:37:52.652168 1535974 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
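The rendered config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new just below. If you need to check such a file by hand, recent kubeadm releases can vet it offline; these commands are a manual aid the test run itself never executes:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise the full init path without touching the node:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run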
	I1218 01:37:52.652238 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:37:52.659908 1535974 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:37:52.660006 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:37:52.667532 1535974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:37:52.680138 1535974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:37:52.693473 1535974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1218 01:37:52.706791 1535974 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:37:52.710393 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:37:52.719930 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.838696 1535974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:37:52.855521 1535974 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:37:52.855591 1535974 certs.go:195] generating shared ca certs ...
	I1218 01:37:52.855623 1535974 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.855818 1535974 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:37:52.855904 1535974 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:37:52.855930 1535974 certs.go:257] generating profile certs ...
	I1218 01:37:52.856023 1535974 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:37:52.856067 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt with IP's: []
	I1218 01:37:52.959822 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt ...
	I1218 01:37:52.959911 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt: {Name:mk1478bd753bc1bd23e013e8b566fd65e1f2e1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960142 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key ...
	I1218 01:37:52.960182 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key: {Name:mk3ecbc7ec855c1ebb5deefb951affdfc3f90c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960334 1535974 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:37:52.960379 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1218 01:37:53.073797 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 ...
	I1218 01:37:53.073831 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056: {Name:mkbff084b54b98d69b985b5f1bd631cb072aabd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074057 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 ...
	I1218 01:37:53.074074 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056: {Name:mkb73e5093692957aa43e022ccaed162c1426b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074169 1535974 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt
	I1218 01:37:53.074248 1535974 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key
	I1218 01:37:53.074307 1535974 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:37:53.074329 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt with IP's: []
	I1218 01:37:53.314103 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt ...
	I1218 01:37:53.314136 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt: {Name:mk54950f9214da12e2d9ae5c67b648894886fbc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314331 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key ...
	I1218 01:37:53.314345 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key: {Name:mk2d7b01164454a2df40dfec571544f9e3d23770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314570 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:37:53.314621 1535974 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:37:53.314635 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:37:53.314664 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:37:53.314694 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:37:53.314721 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:37:53.314772 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:53.315353 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:37:53.334028 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:37:53.352910 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:37:53.371116 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:37:53.388896 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:37:53.407154 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:37:53.424768 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:37:53.442432 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:37:53.459693 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:37:53.477104 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:37:53.494473 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:37:53.511694 1535974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:37:53.524605 1535974 ssh_runner.go:195] Run: openssl version
	I1218 01:37:53.531162 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.539159 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:37:53.547088 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550792 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550872 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.592275 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.599906 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.607314 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.614880 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:37:53.622354 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626261 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626329 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.673215 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:37:53.682819 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1218 01:37:53.692004 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.703568 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:37:53.718183 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726247 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726314 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.769713 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:37:53.777194 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
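The openssl/ln pairs above implement OpenSSL's hashed-directory CA lookup: each CA in /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink. Generalized into one illustrative shell step (same commands the log shows, just composed):

	# Compute the subject hash, then publish the CA under it (illustrative).
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"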
	I1218 01:37:53.784995 1535974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:37:53.788744 1535974 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 01:37:53.788807 1535974 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:53.788935 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:37:53.788995 1535974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:37:53.815984 1535974 cri.go:89] found id: ""
	I1218 01:37:53.816075 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:37:53.824897 1535974 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:37:53.834778 1535974 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:37:53.834915 1535974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:37:53.843777 1535974 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:37:53.843797 1535974 kubeadm.go:158] found existing configuration files:
	
	I1218 01:37:53.843886 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:37:53.851665 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:37:53.851766 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:37:53.859225 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:37:53.867081 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:37:53.867187 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:37:53.874504 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.882220 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:37:53.882286 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.889970 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:37:53.897334 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:37:53.897401 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 01:37:53.904593 1535974 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:37:53.944551 1535974 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:37:53.944611 1535974 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:37:54.027408 1535974 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:37:54.027490 1535974 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:37:54.027530 1535974 kubeadm.go:319] OS: Linux
	I1218 01:37:54.027581 1535974 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:37:54.027632 1535974 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:37:54.027693 1535974 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:37:54.027752 1535974 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:37:54.027803 1535974 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:37:54.027862 1535974 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:37:54.027912 1535974 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:37:54.027964 1535974 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:37:54.028012 1535974 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:37:54.097877 1535974 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:37:54.097993 1535974 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:37:54.098097 1535974 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:37:54.105071 1535974 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:37:54.111500 1535974 out.go:252]   - Generating certificates and keys ...
	I1218 01:37:54.111603 1535974 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:37:54.111672 1535974 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:37:54.530590 1535974 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 01:37:54.977111 1535974 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 01:37:55.271802 1535974 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 01:37:55.800100 1535974 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 01:37:55.973303 1535974 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 01:37:55.974317 1535974 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.183207 1535974 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 01:37:56.183548 1535974 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.263322 1535974 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 01:37:56.663315 1535974 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 01:37:56.917852 1535974 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 01:37:56.918300 1535974 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:37:57.144859 1535974 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:37:57.575780 1535974 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:37:57.878713 1535974 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:37:58.333388 1535974 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:37:58.732682 1535974 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:37:58.733416 1535974 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:37:58.737417 1535974 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:37:58.741102 1535974 out.go:252]   - Booting up control plane ...
	I1218 01:37:58.741209 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:37:58.741290 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:37:58.741882 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:37:58.757974 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:37:58.758530 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:37:58.766133 1535974 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:37:58.766550 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:37:58.766761 1535974 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:37:58.901026 1535974 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:37:58.901158 1535974 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:39:44.779437 1510702 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001112678s
	I1218 01:39:44.779500 1510702 kubeadm.go:319] 
	I1218 01:39:44.779569 1510702 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:39:44.779604 1510702 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:39:44.779726 1510702 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:39:44.779736 1510702 kubeadm.go:319] 
	I1218 01:39:44.779894 1510702 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:39:44.779933 1510702 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:39:44.779971 1510702 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:39:44.779981 1510702 kubeadm.go:319] 
	I1218 01:39:44.784423 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:39:44.784877 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:39:44.784990 1510702 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:39:44.785228 1510702 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:39:44.785237 1510702 kubeadm.go:319] 
	I1218 01:39:44.785307 1510702 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
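The SystemVerification warning above names the escape hatch for this host's cgroup v1 setup. As a sketch only, the corresponding KubeletConfiguration stanza would look like the following — the field name is taken from the warning text itself; verify the exact spelling against KEP 5573 before relying on it:

	# Sketch: opt back in to cgroup v1 for kubelet v1.35+, per the warning above.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false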
	I1218 01:39:44.785368 1510702 kubeadm.go:403] duration metric: took 8m6.991155077s to StartCluster
	I1218 01:39:44.785429 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:39:44.785502 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:39:44.810447 1510702 cri.go:89] found id: ""
	I1218 01:39:44.810472 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.810482 1510702 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:39:44.810488 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:39:44.810555 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:39:44.839406 1510702 cri.go:89] found id: ""
	I1218 01:39:44.839434 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.839443 1510702 logs.go:284] No container was found matching "etcd"
	I1218 01:39:44.839450 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:39:44.839511 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:39:44.868069 1510702 cri.go:89] found id: ""
	I1218 01:39:44.868096 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.868105 1510702 logs.go:284] No container was found matching "coredns"
	I1218 01:39:44.868111 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:39:44.868169 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:39:44.895127 1510702 cri.go:89] found id: ""
	I1218 01:39:44.895154 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.895163 1510702 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:39:44.895170 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:39:44.895229 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:39:44.922045 1510702 cri.go:89] found id: ""
	I1218 01:39:44.922067 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.922075 1510702 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:39:44.922081 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:39:44.922141 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:39:44.947348 1510702 cri.go:89] found id: ""
	I1218 01:39:44.947371 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.947380 1510702 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:39:44.947386 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:39:44.947445 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:39:44.974747 1510702 cri.go:89] found id: ""
	I1218 01:39:44.974817 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.974841 1510702 logs.go:284] No container was found matching "kindnet"
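The per-component sweep above (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) repeats one crictl query shape; in generic form, using only the flags the log itself shows:

	sudo crictl ps -a --quiet --name=<component>                                  # filter by container name
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system    # filter by namespace label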
	I1218 01:39:44.974872 1510702 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:39:44.974904 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:39:45.158574 1510702 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:39:45.158593 1510702 logs.go:123] Gathering logs for containerd ...
	I1218 01:39:45.158606 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:39:45.231899 1510702 logs.go:123] Gathering logs for container status ...
	I1218 01:39:45.231984 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:39:45.274173 1510702 logs.go:123] Gathering logs for kubelet ...
	I1218 01:39:45.274204 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:39:45.347906 1510702 logs.go:123] Gathering logs for dmesg ...
	I1218 01:39:45.347946 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1218 01:39:45.367741 1510702 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 01:39:45.367789 1510702 out.go:285] * 
	W1218 01:39:45.367853 1510702 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.367874 1510702 out.go:285] * 
	W1218 01:39:45.370057 1510702 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:39:45.374979 1510702 out.go:203] 
	W1218 01:39:45.378669 1510702 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.378761 1510702 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:39:45.378790 1510702 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:39:45.381944 1510702 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:31:28 no-preload-970975 containerd[759]: time="2025-12-18T01:31:28.470947504Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.711596763Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.713869317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.723846633Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.727456559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.796825379Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.799106228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.807433713Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.808922925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.292875381Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.295130606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.303984182Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.305000224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.336005639Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.338266928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.348579276Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.349580951Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.488112742Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.491177326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.502169199Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.503038028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.888978136Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.891655209Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.901388576Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.901784046Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:39:46.518560    5561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:46.519142    5561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:46.521079    5561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:46.521838    5561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:46.523065    5561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:39:46 up  8:22,  0 user,  load average: 1.47, 2.06, 2.16
	Linux no-preload-970975 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:39:43 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:43 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 18 01:39:43 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:43 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:43 no-preload-970975 kubelet[5367]: E1218 01:39:43.953826    5367 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:43 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:43 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:44 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 18 01:39:44 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:44 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:44 no-preload-970975 kubelet[5373]: E1218 01:39:44.705039    5373 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:44 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:44 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:45 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 18 01:39:45 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:45 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:45 no-preload-970975 kubelet[5457]: E1218 01:39:45.535642    5457 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:45 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:45 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:46 no-preload-970975 kubelet[5479]: E1218 01:39:46.210063    5479 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 6 (427.56628ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:39:47.044825 1539810 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (511.29s)
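
The failure above is a kubelet crash loop rather than a provisioning problem: the kubelet log shows v1.35.0-rc.1 refusing to validate its configuration on a host still using cgroup v1, so the static control-plane pods never start and every call to localhost:8443 is refused. The log itself names two ways forward; the sketch below is assembled only from those hints and is untested against this job. The --extra-config flag comes verbatim from the Suggestion line; the kubeadm patches directory and file name are assumptions about how one would deliver the 'FailCgroupV1' override the SystemVerification warning describes.

	# Retry with the systemd cgroup driver, as the Suggestion line proposes:
	out/minikube-linux-arm64 start -p no-preload-970975 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Or explicitly allow cgroup v1, per the kubeadm SystemVerification warning.
	# failCgroupV1 is the KubeletConfiguration field behind the 'FailCgroupV1'
	# option the warning names; a file in a kubeadm --patches directory is
	# applied to the "kubeletconfiguration" target, as the [patches] line in
	# the log above shows (hypothetical path, not taken from this run).
	mkdir -p ./patches
	cat > ./patches/kubeletconfiguration.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Note the warning also requires skipping the SystemVerification check itself; the kubeadm invocation quoted above already lists SystemVerification in --ignore-preflight-errors.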

TestStartStop/group/newest-cni/serial/FirstStart (501.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1218 01:37:57.379376 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:38:05.209348 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:38:25.214780 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:39:27.130905 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m20.071503515s)
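
In the stderr below, minikube warns that --network-plugin=cni ships no CNI of its own and points at --cni as the user-friendly alternative; a few lines later it notes that kindnet is what it recommends for the docker driver with containerd. A minimal sketch of that alternative (the CNI choice is illustrative, and this test passes --network-plugin=cni deliberately):

	out/minikube-linux-arm64 start -p newest-cni-120615 --memory=3072 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-rc.1 --cni=kindnet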

-- stdout --
	* [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	* Pulling base image v0.0.48-1765966054-22186 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1218 01:37:41.409265 1535974 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:37:41.409621 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409656 1535974 out.go:374] Setting ErrFile to fd 2...
	I1218 01:37:41.409674 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409955 1535974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:37:41.410413 1535974 out.go:368] Setting JSON to false
	I1218 01:37:41.411299 1535974 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30008,"bootTime":1765991854,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:37:41.411395 1535974 start.go:143] virtualization:  
	I1218 01:37:41.415580 1535974 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:37:41.419867 1535974 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:37:41.419945 1535974 notify.go:221] Checking for updates...
	I1218 01:37:41.426287 1535974 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:37:41.429432 1535974 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:37:41.433605 1535974 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:37:41.436760 1535974 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:37:41.439743 1535974 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:37:41.443485 1535974 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:41.443626 1535974 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:37:41.476508 1535974 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:37:41.476682 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.529692 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.519945478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.529801 1535974 docker.go:319] overlay module found
	I1218 01:37:41.533160 1535974 out.go:179] * Using the docker driver based on user configuration
	I1218 01:37:41.536049 1535974 start.go:309] selected driver: docker
	I1218 01:37:41.536071 1535974 start.go:927] validating driver "docker" against <nil>
	I1218 01:37:41.536087 1535974 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:37:41.536903 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.594960 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.586076136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.595118 1535974 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1218 01:37:41.595153 1535974 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1218 01:37:41.595385 1535974 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:37:41.598327 1535974 out.go:179] * Using Docker driver with root privileges
	I1218 01:37:41.601257 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:41.601333 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:41.601345 1535974 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 01:37:41.601426 1535974 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:41.606414 1535974 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:37:41.609305 1535974 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:37:41.612198 1535974 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:37:41.615045 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:41.615091 1535974 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:37:41.615104 1535974 cache.go:65] Caching tarball of preloaded images
	I1218 01:37:41.615136 1535974 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:37:41.615184 1535974 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:37:41.615194 1535974 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:37:41.615294 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:41.615311 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json: {Name:mk1c21bf1c938626eee4c23c85b81bbb6255d680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:41.634234 1535974 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:37:41.634258 1535974 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:37:41.634273 1535974 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:37:41.634304 1535974 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:37:41.634418 1535974 start.go:364] duration metric: took 93.52µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:37:41.634450 1535974 start.go:93] Provisioning new machine with config: &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:37:41.634560 1535974 start.go:125] createHost starting for "" (driver="docker")
	I1218 01:37:41.638056 1535974 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1218 01:37:41.638295 1535974 start.go:159] libmachine.API.Create for "newest-cni-120615" (driver="docker")
	I1218 01:37:41.638333 1535974 client.go:173] LocalClient.Create starting
	I1218 01:37:41.638412 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 01:37:41.638450 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638466 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638528 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 01:37:41.638549 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638564 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638936 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 01:37:41.659766 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 01:37:41.659848 1535974 network_create.go:284] running [docker network inspect newest-cni-120615] to gather additional debugging logs...
	I1218 01:37:41.659883 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615
	W1218 01:37:41.680710 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 returned with exit code 1
	I1218 01:37:41.680751 1535974 network_create.go:287] error running [docker network inspect newest-cni-120615]: docker network inspect newest-cni-120615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-120615 not found
	I1218 01:37:41.680768 1535974 network_create.go:289] output of [docker network inspect newest-cni-120615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-120615 not found
	
	** /stderr **
	I1218 01:37:41.680867 1535974 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:41.697958 1535974 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
	I1218 01:37:41.698338 1535974 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-687fba22ee0f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:88:15:4b:79:13} reservation:<nil>}
	I1218 01:37:41.698559 1535974 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb17cfebd2dd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:ff:26:b3:7d:81} reservation:<nil>}
	I1218 01:37:41.698831 1535974 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ab8f39244bc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:a9:2b:d7:5d:ce} reservation:<nil>}
	I1218 01:37:41.699243 1535974 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001983860}
	I1218 01:37:41.699261 1535974 network_create.go:124] attempt to create docker network newest-cni-120615 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1218 01:37:41.699323 1535974 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-120615 newest-cni-120615
	I1218 01:37:41.764110 1535974 network_create.go:108] docker network newest-cni-120615 192.168.85.0/24 created
	I1218 01:37:41.764138 1535974 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-120615" container
	I1218 01:37:41.764211 1535974 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 01:37:41.780305 1535974 cli_runner.go:164] Run: docker volume create newest-cni-120615 --label name.minikube.sigs.k8s.io=newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true
	I1218 01:37:41.798478 1535974 oci.go:103] Successfully created a docker volume newest-cni-120615
	I1218 01:37:41.798584 1535974 cli_runner.go:164] Run: docker run --rm --name newest-cni-120615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --entrypoint /usr/bin/test -v newest-cni-120615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 01:37:42.380541 1535974 oci.go:107] Successfully prepared a docker volume newest-cni-120615
	I1218 01:37:42.380617 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:42.380663 1535974 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 01:37:42.380737 1535974 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 01:37:46.199794 1535974 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.819017615s)
	I1218 01:37:46.199835 1535974 kic.go:203] duration metric: took 3.819169809s to extract preloaded images to volume ...
	W1218 01:37:46.199963 1535974 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 01:37:46.200068 1535974 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 01:37:46.253384 1535974 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-120615 --name newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-120615 --network newest-cni-120615 --ip 192.168.85.2 --volume newest-cni-120615:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 01:37:46.551881 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Running}}
	I1218 01:37:46.583903 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.608169 1535974 cli_runner.go:164] Run: docker exec newest-cni-120615 stat /var/lib/dpkg/alternatives/iptables
	I1218 01:37:46.667666 1535974 oci.go:144] the created container "newest-cni-120615" has a running status.
	I1218 01:37:46.667692 1535974 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa...
	I1218 01:37:46.834539 1535974 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 01:37:46.861844 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.884882 1535974 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 01:37:46.884908 1535974 kic_runner.go:114] Args: [docker exec --privileged newest-cni-120615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 01:37:46.942854 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.960511 1535974 machine.go:94] provisionDockerMachine start ...
	I1218 01:37:46.960612 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:46.978530 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:46.978859 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:46.978868 1535974 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:37:46.979490 1535974 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1218 01:37:50.148337 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:37:50.148363 1535974 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:37:50.148435 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.165796 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.166115 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.166132 1535974 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:37:50.330955 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:37:50.331106 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.348111 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.348435 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.348452 1535974 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:37:50.500688 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:37:50.500716 1535974 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:37:50.500744 1535974 ubuntu.go:190] setting up certificates
	I1218 01:37:50.500754 1535974 provision.go:84] configureAuth start
	I1218 01:37:50.500821 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:50.517589 1535974 provision.go:143] copyHostCerts
	I1218 01:37:50.517666 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:37:50.517680 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:37:50.517755 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:37:50.517871 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:37:50.517882 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:37:50.517912 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:37:50.517969 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:37:50.517977 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:37:50.518002 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:37:50.518054 1535974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
	I1218 01:37:50.674888 1535974 provision.go:177] copyRemoteCerts
	I1218 01:37:50.674959 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:37:50.675009 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.693570 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.800638 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:37:50.818412 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:37:50.836171 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:37:50.853859 1535974 provision.go:87] duration metric: took 353.089827ms to configureAuth
	I1218 01:37:50.853884 1535974 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:37:50.854091 1535974 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:50.854099 1535974 machine.go:97] duration metric: took 3.893564907s to provisionDockerMachine
	I1218 01:37:50.854106 1535974 client.go:176] duration metric: took 9.215762234s to LocalClient.Create
	I1218 01:37:50.854131 1535974 start.go:167] duration metric: took 9.215836644s to libmachine.API.Create "newest-cni-120615"
	I1218 01:37:50.854140 1535974 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:37:50.854151 1535974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:37:50.854199 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:37:50.854246 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.871379 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.976751 1535974 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:37:50.979800 1535974 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:37:50.979835 1535974 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:37:50.979846 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:37:50.979919 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:37:50.980017 1535974 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:37:50.980118 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:37:50.987435 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:51.010927 1535974 start.go:296] duration metric: took 156.770961ms for postStartSetup
	I1218 01:37:51.011358 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.028989 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:51.029275 1535974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:37:51.029337 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.046033 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.149901 1535974 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:37:51.154841 1535974 start.go:128] duration metric: took 9.520265624s to createHost
	I1218 01:37:51.154870 1535974 start.go:83] releasing machines lock for "newest-cni-120615", held for 9.520437574s
	I1218 01:37:51.154941 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.172452 1535974 ssh_runner.go:195] Run: cat /version.json
	I1218 01:37:51.172506 1535974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:37:51.172521 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.172564 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.192456 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.195325 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.384735 1535974 ssh_runner.go:195] Run: systemctl --version
	I1218 01:37:51.391571 1535974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:37:51.396317 1535974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:37:51.396387 1535974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:37:51.426976 1535974 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
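	The find/-exec step above disables conflicting bridge and podman CNI configs by renaming them to *.mk_disabled rather than deleting them. A minimal sketch for undoing that rename by hand, should those configs be needed again:

	    # restore every config the step above disabled (guard with
	    # `shopt -s nullglob` if the directory may contain none)
	    for f in /etc/cni/net.d/*.mk_disabled; do
	        sudo mv "$f" "${f%.mk_disabled}"
	    done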
	I1218 01:37:51.427002 1535974 start.go:496] detecting cgroup driver to use...
	I1218 01:37:51.427045 1535974 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:37:51.427094 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:37:51.443517 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:37:51.461122 1535974 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:37:51.461182 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:37:51.478844 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:37:51.497057 1535974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:37:51.618030 1535974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:37:51.746908 1535974 docker.go:234] disabling docker service ...
	I1218 01:37:51.747041 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:37:51.768317 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:37:51.781980 1535974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:37:51.904322 1535974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:37:52.052799 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:37:52.066888 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:37:52.082976 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:37:52.093587 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:37:52.102930 1535974 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:37:52.103042 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:37:52.112246 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.121385 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:37:52.130577 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.139689 1535974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:37:52.149904 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:37:52.159110 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:37:52.168101 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:37:52.177205 1535974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:37:52.185241 1535974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:37:52.193080 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.308369 1535974 ssh_runner.go:195] Run: sudo systemctl restart containerd
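	The sed edits above rewrite /etc/containerd/config.toml in place (cgroupfs as cgroup driver, pause:3.10.1 as sandbox image, conf_dir, unprivileged ports) before the daemon-reload and restart. A sketch for spot-checking that the restart picked the edits up, assuming the docker driver so the node container is reachable with docker exec:

	    docker exec newest-cni-120615 sh -c \
	        "grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml \
	         && systemctl is-active containerd"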
	I1218 01:37:52.450163 1535974 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:37:52.450242 1535974 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:37:52.454206 1535974 start.go:564] Will wait 60s for crictl version
	I1218 01:37:52.454330 1535974 ssh_runner.go:195] Run: which crictl
	I1218 01:37:52.457885 1535974 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:37:52.482102 1535974 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:37:52.482223 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.502684 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.526110 1535974 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:37:52.529020 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:52.546624 1535974 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:37:52.550634 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
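	The /etc/hosts rewrite above uses the { grep -v ...; echo ...; } > tmp; sudo cp pattern because inside a Docker container /etc/hosts is bind-mounted, so it has to be overwritten in place with cp rather than swapped out with mv. A sketch of the same pattern with the entry as parameters (HOST and IP are placeholders):

	    HOST=host.minikube.internal IP=192.168.85.1
	    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$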
	I1218 01:37:52.563708 1535974 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:37:52.566648 1535974 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:37:52.566803 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:52.566895 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.591897 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.591927 1535974 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:37:52.592017 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.621212 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.621242 1535974 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:37:52.621251 1535974 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:37:52.621346 1535974 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:37:52.621421 1535974 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:37:52.651981 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:52.652006 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:52.652029 1535974 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:37:52.652053 1535974 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:37:52.652168 1535974 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
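	The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and copied over /var/tmp/minikube/kubeadm.yaml before init (see below). As a hedged sketch, assuming the v1.35.0-rc.1 kubeadm here ships the 'config validate' subcommand found in recent releases, such a file can be sanity-checked up front:

	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml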
	I1218 01:37:52.652238 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:37:52.659908 1535974 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:37:52.660006 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:37:52.667532 1535974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:37:52.680138 1535974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:37:52.693473 1535974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1218 01:37:52.706791 1535974 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:37:52.710393 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:37:52.719930 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.838696 1535974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:37:52.855521 1535974 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:37:52.855591 1535974 certs.go:195] generating shared ca certs ...
	I1218 01:37:52.855623 1535974 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.855818 1535974 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:37:52.855904 1535974 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:37:52.855930 1535974 certs.go:257] generating profile certs ...
	I1218 01:37:52.856023 1535974 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:37:52.856067 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt with IP's: []
	I1218 01:37:52.959822 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt ...
	I1218 01:37:52.959911 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt: {Name:mk1478bd753bc1bd23e013e8b566fd65e1f2e1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960142 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key ...
	I1218 01:37:52.960182 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key: {Name:mk3ecbc7ec855c1ebb5deefb951affdfc3f90c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960334 1535974 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:37:52.960379 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1218 01:37:53.073797 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 ...
	I1218 01:37:53.073831 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056: {Name:mkbff084b54b98d69b985b5f1bd631cb072aabd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074057 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 ...
	I1218 01:37:53.074074 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056: {Name:mkb73e5093692957aa43e022ccaed162c1426b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074169 1535974 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt
	I1218 01:37:53.074248 1535974 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key
	I1218 01:37:53.074307 1535974 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:37:53.074329 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt with IP's: []
	I1218 01:37:53.314103 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt ...
	I1218 01:37:53.314136 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt: {Name:mk54950f9214da12e2d9ae5c67b648894886fbc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314331 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key ...
	I1218 01:37:53.314345 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key: {Name:mk2d7b01164454a2df40dfec571544f9e3d23770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314570 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:37:53.314621 1535974 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:37:53.314635 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:37:53.314664 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:37:53.314694 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:37:53.314721 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:37:53.314772 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:53.315353 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:37:53.334028 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:37:53.352910 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:37:53.371116 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:37:53.388896 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:37:53.407154 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:37:53.424768 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:37:53.442432 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:37:53.459693 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:37:53.477104 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:37:53.494473 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:37:53.511694 1535974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:37:53.524605 1535974 ssh_runner.go:195] Run: openssl version
	I1218 01:37:53.531162 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.539159 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:37:53.547088 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550792 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550872 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.592275 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.599906 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.607314 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.614880 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:37:53.622354 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626261 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626329 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.673215 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:37:53.682819 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1218 01:37:53.692004 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.703568 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:37:53.718183 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726247 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726314 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.769713 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:37:53.777194 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
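	The <hash>.0 symlinks above follow OpenSSL's subject-hash lookup scheme: openssl x509 -hash prints the truncated subject hash (3ec20f2e, b5213941 and 51391683 in this run), and TLS consumers resolve trust anchors via <hash>.0 under /etc/ssl/certs. A sketch tying the two steps together for the minikube CA:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${h}.0"     # the symlink created above
	    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem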
	I1218 01:37:53.784995 1535974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:37:53.788744 1535974 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 01:37:53.788807 1535974 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:53.788935 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:37:53.788995 1535974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:37:53.815984 1535974 cri.go:89] found id: ""
	I1218 01:37:53.816075 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:37:53.824897 1535974 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:37:53.834778 1535974 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:37:53.834915 1535974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:37:53.843777 1535974 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:37:53.843797 1535974 kubeadm.go:158] found existing configuration files:
	
	I1218 01:37:53.843886 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:37:53.851665 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:37:53.851766 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:37:53.859225 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:37:53.867081 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:37:53.867187 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:37:53.874504 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.882220 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:37:53.882286 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.889970 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:37:53.897334 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:37:53.897401 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 01:37:53.904593 1535974 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:37:53.944551 1535974 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:37:53.944611 1535974 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:37:54.027408 1535974 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:37:54.027490 1535974 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:37:54.027530 1535974 kubeadm.go:319] OS: Linux
	I1218 01:37:54.027581 1535974 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:37:54.027632 1535974 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:37:54.027693 1535974 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:37:54.027752 1535974 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:37:54.027803 1535974 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:37:54.027862 1535974 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:37:54.027912 1535974 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:37:54.027964 1535974 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:37:54.028012 1535974 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:37:54.097877 1535974 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:37:54.097993 1535974 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:37:54.098097 1535974 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:37:54.105071 1535974 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:37:54.111500 1535974 out.go:252]   - Generating certificates and keys ...
	I1218 01:37:54.111603 1535974 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:37:54.111672 1535974 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:37:54.530590 1535974 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 01:37:54.977111 1535974 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 01:37:55.271802 1535974 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 01:37:55.800100 1535974 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 01:37:55.973303 1535974 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 01:37:55.974317 1535974 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.183207 1535974 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 01:37:56.183548 1535974 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.263322 1535974 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 01:37:56.663315 1535974 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 01:37:56.917852 1535974 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 01:37:56.918300 1535974 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:37:57.144859 1535974 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:37:57.575780 1535974 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:37:57.878713 1535974 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:37:58.333388 1535974 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:37:58.732682 1535974 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:37:58.733416 1535974 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:37:58.737417 1535974 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:37:58.741102 1535974 out.go:252]   - Booting up control plane ...
	I1218 01:37:58.741209 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:37:58.741290 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:37:58.741882 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:37:58.757974 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:37:58.758530 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:37:58.766133 1535974 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:37:58.766550 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:37:58.766761 1535974 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:37:58.901026 1535974 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:37:58.901158 1535974 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:41:58.901889 1535974 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000959518s
	I1218 01:41:58.901915 1535974 kubeadm.go:319] 
	I1218 01:41:58.901973 1535974 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:41:58.902006 1535974 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:41:58.902111 1535974 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:41:58.902115 1535974 kubeadm.go:319] 
	I1218 01:41:58.902219 1535974 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:41:58.902251 1535974 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:41:58.902283 1535974 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:41:58.902287 1535974 kubeadm.go:319] 
	I1218 01:41:58.909121 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:41:58.909533 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:41:58.909635 1535974 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:41:58.909878 1535974 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 01:41:58.909884 1535974 kubeadm.go:319] 
	I1218 01:41:58.909948 1535974 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1218 01:41:58.910051 1535974 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000959518s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
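	The failure above is a kubelet that never came up (nothing answered on 127.0.0.1:10248) on a cgroup v1 host (kernel 5.15.0-1084-aws), and the SystemVerification warning names the escape hatch for kubelet v1.35+: set the KubeletConfiguration option FailCgroupV1 to false. A hedged sketch of the first diagnostics plus the matching config fragment (the camelCase field name is inferred from the warning; appending to config.yaml is illustrative only, since kubeadm regenerates that file on init):

	    # on the node: see why kubelet never answered /healthz
	    systemctl status kubelet
	    journalctl -xeu kubelet --no-pager | tail -n 50

	    # KubeletConfiguration fragment opting back into cgroup v1 support
	    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	    failCgroupV1: false
	    EOF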
	I1218 01:41:58.910129 1535974 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 01:41:59.328841 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:41:59.342624 1535974 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:41:59.342738 1535974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:41:59.351529 1535974 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:41:59.351551 1535974 kubeadm.go:158] found existing configuration files:
	
	I1218 01:41:59.351607 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:41:59.359598 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:41:59.359688 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:41:59.367501 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:41:59.375582 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:41:59.375649 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:41:59.383413 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:41:59.391374 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:41:59.391444 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:41:59.399981 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:41:59.407991 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:41:59.408054 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 01:41:59.415368 1535974 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:41:59.457909 1535974 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:41:59.458215 1535974 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:41:59.537330 1535974 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:41:59.537416 1535974 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:41:59.537453 1535974 kubeadm.go:319] OS: Linux
	I1218 01:41:59.537500 1535974 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:41:59.537551 1535974 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:41:59.537599 1535974 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:41:59.537649 1535974 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:41:59.537698 1535974 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:41:59.537753 1535974 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:41:59.537800 1535974 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:41:59.537850 1535974 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:41:59.537895 1535974 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:41:59.601143 1535974 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:41:59.601259 1535974 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:41:59.601369 1535974 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:41:59.609176 1535974 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:41:59.612708 1535974 out.go:252]   - Generating certificates and keys ...
	I1218 01:41:59.612866 1535974 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:41:59.612946 1535974 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:41:59.613032 1535974 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 01:41:59.613110 1535974 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 01:41:59.613200 1535974 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 01:41:59.613293 1535974 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 01:41:59.613424 1535974 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 01:41:59.613519 1535974 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 01:41:59.613611 1535974 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 01:41:59.613738 1535974 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 01:41:59.613808 1535974 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 01:41:59.613893 1535974 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:41:59.965901 1535974 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:42:00.273593 1535974 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:42:00.517614 1535974 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:42:00.754315 1535974 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:42:00.831013 1535974 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:42:00.831849 1535974 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:42:00.834692 1535974 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:42:00.838062 1535974 out.go:252]   - Booting up control plane ...
	I1218 01:42:00.838173 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:42:00.838258 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:42:00.838866 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:42:00.861421 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:42:00.861532 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:42:00.869206 1535974 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:42:00.869621 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:42:00.869690 1535974 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:42:01.017070 1535974 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:42:01.017185 1535974 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:46:01.012416 1535974 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000248647s
	I1218 01:46:01.012441 1535974 kubeadm.go:319] 
	I1218 01:46:01.012495 1535974 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:46:01.012527 1535974 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:46:01.012642 1535974 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:46:01.012648 1535974 kubeadm.go:319] 
	I1218 01:46:01.012746 1535974 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:46:01.012776 1535974 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:46:01.012805 1535974 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:46:01.012808 1535974 kubeadm.go:319] 
	I1218 01:46:01.017099 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:46:01.017529 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:46:01.017640 1535974 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:46:01.017873 1535974 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:46:01.017879 1535974 kubeadm.go:319] 
	I1218 01:46:01.017947 1535974 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1218 01:46:01.017993 1535974 kubeadm.go:403] duration metric: took 8m7.229192197s to StartCluster
	I1218 01:46:01.018027 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:46:01.018087 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:46:01.042559 1535974 cri.go:89] found id: ""
	I1218 01:46:01.042584 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.042593 1535974 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:46:01.042599 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:46:01.042663 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:46:01.070638 1535974 cri.go:89] found id: ""
	I1218 01:46:01.070661 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.070670 1535974 logs.go:284] No container was found matching "etcd"
	I1218 01:46:01.070675 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:46:01.070733 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:46:01.095625 1535974 cri.go:89] found id: ""
	I1218 01:46:01.095652 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.095661 1535974 logs.go:284] No container was found matching "coredns"
	I1218 01:46:01.095667 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:46:01.095726 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:46:01.123024 1535974 cri.go:89] found id: ""
	I1218 01:46:01.123049 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.123058 1535974 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:46:01.123066 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:46:01.123127 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:46:01.149205 1535974 cri.go:89] found id: ""
	I1218 01:46:01.149273 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.149283 1535974 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:46:01.149291 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:46:01.149370 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:46:01.175919 1535974 cri.go:89] found id: ""
	I1218 01:46:01.175947 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.175957 1535974 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:46:01.175985 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:46:01.176067 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:46:01.203077 1535974 cri.go:89] found id: ""
	I1218 01:46:01.203101 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.203110 1535974 logs.go:284] No container was found matching "kindnet"
	I1218 01:46:01.203121 1535974 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:46:01.203133 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:46:01.267505 1535974 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:46:01.258672    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.259298    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261036    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261647    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.263429    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:46:01.258672    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.259298    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261036    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261647    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.263429    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:46:01.267525 1535974 logs.go:123] Gathering logs for containerd ...
	I1218 01:46:01.267538 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:46:01.305435 1535974 logs.go:123] Gathering logs for container status ...
	I1218 01:46:01.305473 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:46:01.335002 1535974 logs.go:123] Gathering logs for kubelet ...
	I1218 01:46:01.335029 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:46:01.392317 1535974 logs.go:123] Gathering logs for dmesg ...
	I1218 01:46:01.392351 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1218 01:46:01.412420 1535974 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 01:46:01.412472 1535974 out.go:285] * 
	W1218 01:46:01.412527 1535974 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:46:01.412543 1535974 out.go:285] * 
	W1218 01:46:01.414976 1535974 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:46:01.421623 1535974 out.go:203] 
	W1218 01:46:01.425533 1535974 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:46:01.425601 1535974 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:46:01.425624 1535974 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:46:01.428730 1535974 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
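For triage, the failure above reduces to the kubelet never answering its health endpoint, so kubeadm times out in wait-control-plane. The commands below are a diagnostic sketch assembled only from hints already printed in the log (the profile name, driver flags, and the kubelet.cgroup-driver suggestion all appear in this report; treat the retry as an unverified workaround, not a confirmed fix):

    # Probe the endpoint kubeadm was polling (it returned "connection refused" above).
    minikube ssh -p newest-cni-120615 "curl -sSL http://127.0.0.1:10248/healthz"

    # Inspect kubelet state and recent logs, exactly as the kubeadm output suggests.
    minikube ssh -p newest-cni-120615 "sudo systemctl status kubelet"
    minikube ssh -p newest-cni-120615 "sudo journalctl -xeu kubelet | tail -n 100"

    # Retry with the cgroup driver named in the suggestion near the end of the stderr.
    minikube start -p newest-cni-120615 --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1 --extra-config=kubelet.cgroup-driver=systemd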
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-120615
helpers_test.go:244: (dbg) docker inspect newest-cni-120615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	        "Created": "2025-12-18T01:37:46.267734033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1536406,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:37:46.322657241Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hosts",
	        "LogPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1-json.log",
	        "Name": "/newest-cni-120615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-120615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-120615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	                "LowerDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-120615",
	                "Source": "/var/lib/docker/volumes/newest-cni-120615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-120615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-120615",
	                "name.minikube.sigs.k8s.io": "newest-cni-120615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f76f018a6fd20ce57adf8edf73d97febe601a6c68392504c582065a9ed8fc45c",
	            "SandboxKey": "/var/run/docker/netns/f76f018a6fd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34208"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34211"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-120615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:cc:f5:06:cc:53",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3561ba231e6c48a625724c6039bb103aabf4482d7db78bad659da0b08d445469",
	                    "EndpointID": "a47896cd0687019046d2563e1820f4df3000f6f6a5fabac9bfc127e2ff82e230",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-120615",
	                        "dd9cd12a762d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
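The inspect dump above is the raw source for the port and address lookups that minikube performs later in this report. For a container in this state, the same fields can be read back directly with docker's Go-template format flag; a minimal sketch, assuming the container name newest-cni-120615 from this report:

    # Host port mapped to the container's SSH port (22/tcp); matches "34207" in the dump above
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-120615

    # Static IPv4 address on the profile network; matches "192.168.85.2" in the dump above
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-120615

The first template is the same one the cli_runner lines in the "Last Start" log below use to locate a profile's SSH endpoint for provisioning.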
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615: exit status 6 (338.203113ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:46:01.871300 1548221 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-120615" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
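The exit status 6 comes from the kubeconfig check, not the host probe: stdout reports the host as Running, while stderr shows the profile missing from the kubeconfig file, which is what triggers the stale-context warning. The warning names its own fix; a minimal sketch, assuming the profile name from this report:

    # Regenerate the kubeconfig entry for the profile, per the warning above
    out/minikube-linux-arm64 update-context -p newest-cni-120615

    # Re-run the probe that returned exit status 6
    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615

Note that update-context only repairs the kubeconfig entry; it does not address whatever prevented the cluster from finishing its start.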
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-120615 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-922343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:33 UTC │
	│ stop    │ -p embed-certs-922343 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-922343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:35 UTC │
	│ image   │ embed-certs-922343 image list --format=json                                                                                                                                                                                                              │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ pause   │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ unpause │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:39 UTC │                     │
	│ stop    │ -p no-preload-970975 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ addons  │ enable dashboard -p no-preload-970975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ start   │ -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:41:17
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:41:17.364681 1542458 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:41:17.364846 1542458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:41:17.364875 1542458 out.go:374] Setting ErrFile to fd 2...
	I1218 01:41:17.364894 1542458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:41:17.365168 1542458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:41:17.365597 1542458 out.go:368] Setting JSON to false
	I1218 01:41:17.366532 1542458 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30224,"bootTime":1765991854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:41:17.366626 1542458 start.go:143] virtualization:  
	I1218 01:41:17.369453 1542458 out.go:179] * [no-preload-970975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:41:17.373146 1542458 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:41:17.373244 1542458 notify.go:221] Checking for updates...
	I1218 01:41:17.378986 1542458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:41:17.381940 1542458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:17.384732 1542458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:41:17.387579 1542458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:41:17.390446 1542458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:41:17.393789 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:17.394396 1542458 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:41:17.426513 1542458 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:41:17.426640 1542458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:41:17.488029 1542458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:41:17.478703453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:41:17.488135 1542458 docker.go:319] overlay module found
	I1218 01:41:17.491211 1542458 out.go:179] * Using the docker driver based on existing profile
	I1218 01:41:17.494107 1542458 start.go:309] selected driver: docker
	I1218 01:41:17.494124 1542458 start.go:927] validating driver "docker" against &{Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:17.494227 1542458 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:41:17.494955 1542458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:41:17.562043 1542458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:41:17.552976354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:41:17.562397 1542458 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 01:41:17.562433 1542458 cni.go:84] Creating CNI manager for ""
	I1218 01:41:17.562482 1542458 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:41:17.562540 1542458 start.go:353] cluster config:
	{Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:17.565742 1542458 out.go:179] * Starting "no-preload-970975" primary control-plane node in "no-preload-970975" cluster
	I1218 01:41:17.568662 1542458 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:41:17.571552 1542458 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:41:17.574233 1542458 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:41:17.574310 1542458 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:41:17.574357 1542458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json ...
	I1218 01:41:17.574663 1542458 cache.go:107] acquiring lock: {Name:mkbe76c9f71177ead8df5bdae626dba72c24e88a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574752 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1218 01:41:17.574760 1542458 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.281µs
	I1218 01:41:17.574771 1542458 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1218 01:41:17.574783 1542458 cache.go:107] acquiring lock: {Name:mk73deadf102b9ef2729ab344cb753d1e81c8e69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574814 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1218 01:41:17.574818 1542458 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 36.988µs
	I1218 01:41:17.574825 1542458 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574834 1542458 cache.go:107] acquiring lock: {Name:mk08959f4f9aec2f8cb7736193533393f169491b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574861 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1218 01:41:17.574866 1542458 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 32.787µs
	I1218 01:41:17.574871 1542458 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574881 1542458 cache.go:107] acquiring lock: {Name:mk51756ddbebcd3ad705096b7bac91c4370ab67f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574908 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1218 01:41:17.574913 1542458 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 32.615µs
	I1218 01:41:17.574918 1542458 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574927 1542458 cache.go:107] acquiring lock: {Name:mkf6c55bc605708b579c41afc97203c8d4e81ed8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574954 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1218 01:41:17.574958 1542458 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 32.934µs
	I1218 01:41:17.574964 1542458 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574972 1542458 cache.go:107] acquiring lock: {Name:mk1ebccb0216e63c057736909b9d1bea2501f43c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575000 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1218 01:41:17.575005 1542458 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 34.018µs
	I1218 01:41:17.575011 1542458 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1218 01:41:17.575028 1542458 cache.go:107] acquiring lock: {Name:mk273a40d27e5765473ae1c9ccf1347edbca61c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575052 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1218 01:41:17.575056 1542458 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 29.734µs
	I1218 01:41:17.575061 1542458 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1218 01:41:17.575071 1542458 cache.go:107] acquiring lock: {Name:mkb0d564e902314f0008f6dd25799cc8c98892bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575096 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1218 01:41:17.575101 1542458 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.319µs
	I1218 01:41:17.575107 1542458 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1218 01:41:17.575113 1542458 cache.go:87] Successfully saved all images to host disk.
	I1218 01:41:17.593931 1542458 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:41:17.593955 1542458 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:41:17.593976 1542458 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:41:17.594007 1542458 start.go:360] acquireMachinesLock for no-preload-970975: {Name:mkc5466bd6e57a370f52d05d09914f47211c2efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.594062 1542458 start.go:364] duration metric: took 35.782µs to acquireMachinesLock for "no-preload-970975"
	I1218 01:41:17.594089 1542458 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:41:17.594095 1542458 fix.go:54] fixHost starting: 
	I1218 01:41:17.594362 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:17.612849 1542458 fix.go:112] recreateIfNeeded on no-preload-970975: state=Stopped err=<nil>
	W1218 01:41:17.612890 1542458 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:41:17.616118 1542458 out.go:252] * Restarting existing docker container for "no-preload-970975" ...
	I1218 01:41:17.616203 1542458 cli_runner.go:164] Run: docker start no-preload-970975
	I1218 01:41:17.884856 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:17.905905 1542458 kic.go:430] container "no-preload-970975" state is running.
	I1218 01:41:17.906316 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:17.937083 1542458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json ...
	I1218 01:41:17.937308 1542458 machine.go:94] provisionDockerMachine start ...
	I1218 01:41:17.937366 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:17.956149 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:17.956499 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:17.956517 1542458 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:41:17.957070 1542458 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55092->127.0.0.1:34212: read: connection reset by peer
	I1218 01:41:21.112268 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-970975
	
	I1218 01:41:21.112295 1542458 ubuntu.go:182] provisioning hostname "no-preload-970975"
	I1218 01:41:21.112359 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.130603 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:21.130920 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:21.130938 1542458 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-970975 && echo "no-preload-970975" | sudo tee /etc/hostname
	I1218 01:41:21.297556 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-970975
	
	I1218 01:41:21.297646 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.320590 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:21.320958 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:21.320986 1542458 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-970975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-970975/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-970975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:41:21.476955 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:41:21.476981 1542458 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:41:21.477006 1542458 ubuntu.go:190] setting up certificates
	I1218 01:41:21.477017 1542458 provision.go:84] configureAuth start
	I1218 01:41:21.477082 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:21.494228 1542458 provision.go:143] copyHostCerts
	I1218 01:41:21.494310 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:41:21.494324 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:41:21.494401 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:41:21.494522 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:41:21.494533 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:41:21.494569 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:41:21.494641 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:41:21.494660 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:41:21.494691 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:41:21.494755 1542458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.no-preload-970975 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-970975]
	I1218 01:41:21.673721 1542458 provision.go:177] copyRemoteCerts
	I1218 01:41:21.673787 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:41:21.673828 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.691241 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:21.796420 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:41:21.814210 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:41:21.832654 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:41:21.850820 1542458 provision.go:87] duration metric: took 373.776889ms to configureAuth
	I1218 01:41:21.850846 1542458 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:41:21.851039 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:21.851046 1542458 machine.go:97] duration metric: took 3.913731319s to provisionDockerMachine
	I1218 01:41:21.851053 1542458 start.go:293] postStartSetup for "no-preload-970975" (driver="docker")
	I1218 01:41:21.851066 1542458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:41:21.851125 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:41:21.851174 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.867950 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:21.976450 1542458 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:41:21.979834 1542458 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:41:21.979870 1542458 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:41:21.979882 1542458 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:41:21.979967 1542458 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:41:21.980082 1542458 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:41:21.980195 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:41:21.987678 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:41:22.007779 1542458 start.go:296] duration metric: took 156.709262ms for postStartSetup
	I1218 01:41:22.007867 1542458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:41:22.007919 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.027575 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.133734 1542458 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:41:22.138514 1542458 fix.go:56] duration metric: took 4.544410806s for fixHost
	I1218 01:41:22.138549 1542458 start.go:83] releasing machines lock for "no-preload-970975", held for 4.544464704s
	I1218 01:41:22.138644 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:22.157798 1542458 ssh_runner.go:195] Run: cat /version.json
	I1218 01:41:22.157854 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.158122 1542458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:41:22.158189 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.181525 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.198466 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.397543 1542458 ssh_runner.go:195] Run: systemctl --version
	I1218 01:41:22.404123 1542458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:41:22.408396 1542458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:41:22.408478 1542458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:41:22.416316 1542458 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 01:41:22.416385 1542458 start.go:496] detecting cgroup driver to use...
	I1218 01:41:22.416431 1542458 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:41:22.416498 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:41:22.433783 1542458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:41:22.447542 1542458 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:41:22.447641 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:41:22.463765 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:41:22.477008 1542458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:41:22.587523 1542458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:41:22.731488 1542458 docker.go:234] disabling docker service ...
	I1218 01:41:22.731561 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:41:22.747388 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:41:22.761578 1542458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:41:22.877887 1542458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:41:23.031065 1542458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:41:23.045226 1542458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:41:23.061762 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:41:23.072968 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:41:23.082631 1542458 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:41:23.082726 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:41:23.091532 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:41:23.101058 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:41:23.110071 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:41:23.119106 1542458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:41:23.127834 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:41:23.137037 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:41:23.145854 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:41:23.155263 1542458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:41:23.162940 1542458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:41:23.170628 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:23.282537 1542458 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 01:41:23.387115 1542458 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:41:23.387237 1542458 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:41:23.391563 1542458 start.go:564] Will wait 60s for crictl version
	I1218 01:41:23.391643 1542458 ssh_runner.go:195] Run: which crictl
	I1218 01:41:23.395601 1542458 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:41:23.420820 1542458 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:41:23.420915 1542458 ssh_runner.go:195] Run: containerd --version
	I1218 01:41:23.441612 1542458 ssh_runner.go:195] Run: containerd --version
	I1218 01:41:23.470931 1542458 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:41:23.474060 1542458 cli_runner.go:164] Run: docker network inspect no-preload-970975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:41:23.491578 1542458 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1218 01:41:23.495808 1542458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:41:23.506072 1542458 kubeadm.go:884] updating cluster {Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:41:23.506187 1542458 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:41:23.506254 1542458 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:41:23.531180 1542458 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:41:23.531204 1542458 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:41:23.531212 1542458 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:41:23.531314 1542458 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-970975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:41:23.531379 1542458 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:41:23.556615 1542458 cni.go:84] Creating CNI manager for ""
	I1218 01:41:23.556686 1542458 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:41:23.556708 1542458 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 01:41:23.556730 1542458 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-970975 NodeName:no-preload-970975 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:41:23.556849 1542458 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-970975"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
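	The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch of an out-of-band sanity check (assuming the node's kubeadm is new enough to ship the validate subcommand, v1.26+), the file can be checked without applying anything:

	    # validate the rendered kubeadm config without touching the cluster
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new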
	
	I1218 01:41:23.556928 1542458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:41:23.564934 1542458 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:41:23.565015 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:41:23.572862 1542458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:41:23.585997 1542458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:41:23.599495 1542458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1218 01:41:23.614253 1542458 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:41:23.617922 1542458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
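	Unpacked, that one-liner filters out any stale control-plane.minikube.internal entry, appends the fresh mapping, and finishes with cp rather than mv, so the bind-mounted /etc/hosts inside the Docker container keeps its inode:

	    # equivalent, spelled out step by step
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	    printf '192.168.76.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts   # cp writes in place; mv would replace the bind-mounted file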
	I1218 01:41:23.627614 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:23.769940 1542458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:41:23.786080 1542458 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975 for IP: 192.168.76.2
	I1218 01:41:23.786157 1542458 certs.go:195] generating shared ca certs ...
	I1218 01:41:23.786187 1542458 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:23.786374 1542458 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:41:23.786452 1542458 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:41:23.786479 1542458 certs.go:257] generating profile certs ...
	I1218 01:41:23.786915 1542458 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.key
	I1218 01:41:23.787042 1542458 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key.4df284eb
	I1218 01:41:23.787216 1542458 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key
	I1218 01:41:23.787372 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:41:23.787441 1542458 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:41:23.787473 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:41:23.787542 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:41:23.787589 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:41:23.787640 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:41:23.787726 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:41:23.788890 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:41:23.817320 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:41:23.835171 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:41:23.854360 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:41:23.874274 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:41:23.891844 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 01:41:23.909145 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:41:23.927246 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 01:41:23.945240 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:41:23.963173 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:41:23.980488 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:41:23.998141 1542458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:41:24.014660 1542458 ssh_runner.go:195] Run: openssl version
	I1218 01:41:24.021666 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.029705 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:41:24.037493 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.041469 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.041581 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.085117 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:41:24.092891 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.100861 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:41:24.108550 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.112664 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.112735 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.153886 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:41:24.161696 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.169404 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:41:24.177530 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.181402 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.181471 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.222746 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
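	The 51391683.0, 3ec20f2e.0 and b5213941.0 names tested above are OpenSSL subject-hash symlinks, derived from each certificate rather than chosen by minikube, which is why the log computes the hash before testing the link. The same check can be reproduced by hand (minikubeCA example, paths from the log):

	    # compute the subject hash and confirm the trust-store symlink exists
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	    sudo test -L "/etc/ssl/certs/${h}.0" && echo "trust symlink present"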
	I1218 01:41:24.230660 1542458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:41:24.234767 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:41:24.276020 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:41:24.322161 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:41:24.363215 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:41:24.405810 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:41:24.447504 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
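	Each openssl run above passes -checkend 86400, which makes openssl exit non-zero if the certificate expires within 86400 seconds (24 hours); a failing check is presumably what would trigger regeneration. A standalone example:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	        && echo "valid for at least 24h" || echo "expires within 24h"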
	I1218 01:41:24.489540 1542458 kubeadm.go:401] StartCluster: {Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:24.489634 1542458 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:41:24.489710 1542458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:41:24.515412 1542458 cri.go:89] found id: ""
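	The empty "found id" result above is the output of the CRI query on the preceding line; run by hand it prints one container ID per line, and an empty list simply means no kube-system containers are running yet after the restart:

	    # same query as the log line above
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system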
	I1218 01:41:24.515486 1542458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:41:24.523200 1542458 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:41:24.523218 1542458 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:41:24.523266 1542458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:41:24.530588 1542458 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:41:24.531015 1542458 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:24.531121 1542458 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-970975" cluster setting kubeconfig missing "no-preload-970975" context setting]
	I1218 01:41:24.531398 1542458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:24.532672 1542458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:41:24.540238 1542458 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1218 01:41:24.540316 1542458 kubeadm.go:602] duration metric: took 17.091472ms to restartPrimaryControlPlane
	I1218 01:41:24.540342 1542458 kubeadm.go:403] duration metric: took 50.814694ms to StartCluster
	I1218 01:41:24.540377 1542458 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:24.540439 1542458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:24.541093 1542458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:24.541305 1542458 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:41:24.541607 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:24.541651 1542458 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 01:41:24.541714 1542458 addons.go:70] Setting storage-provisioner=true in profile "no-preload-970975"
	I1218 01:41:24.541728 1542458 addons.go:239] Setting addon storage-provisioner=true in "no-preload-970975"
	I1218 01:41:24.541756 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.541767 1542458 addons.go:70] Setting dashboard=true in profile "no-preload-970975"
	I1218 01:41:24.541785 1542458 addons.go:239] Setting addon dashboard=true in "no-preload-970975"
	W1218 01:41:24.541792 1542458 addons.go:248] addon dashboard should already be in state true
	I1218 01:41:24.541815 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.542236 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.542251 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.545008 1542458 addons.go:70] Setting default-storageclass=true in profile "no-preload-970975"
	I1218 01:41:24.545648 1542458 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-970975"
	I1218 01:41:24.545997 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.546822 1542458 out.go:179] * Verifying Kubernetes components...
	I1218 01:41:24.552927 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:24.570156 1542458 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:41:24.573081 1542458 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:41:24.573110 1542458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:41:24.573184 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:24.592695 1542458 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1218 01:41:24.595365 1542458 addons.go:239] Setting addon default-storageclass=true in "no-preload-970975"
	I1218 01:41:24.595416 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.595944 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.600301 1542458 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1218 01:41:24.603288 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1218 01:41:24.603315 1542458 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1218 01:41:24.603380 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:24.629343 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:24.636778 1542458 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:24.636799 1542458 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:41:24.636864 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:24.658544 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:24.669350 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
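	The sshutil lines above contain everything needed to reach the node by hand; assembled into one command (host port, key path and user taken verbatim from the log):

	    ssh -p 34212 \
	        -i /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa \
	        docker@127.0.0.1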
	I1218 01:41:24.789107 1542458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:41:24.835097 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:41:24.837668 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1218 01:41:24.837689 1542458 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1218 01:41:24.853236 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1218 01:41:24.853264 1542458 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1218 01:41:24.869445 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:24.897171 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1218 01:41:24.897197 1542458 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1218 01:41:24.938270 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1218 01:41:24.938297 1542458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1218 01:41:24.951622 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1218 01:41:24.951648 1542458 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1218 01:41:24.971216 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1218 01:41:24.971238 1542458 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1218 01:41:24.983819 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1218 01:41:24.983893 1542458 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1218 01:41:24.996816 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1218 01:41:24.996840 1542458 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1218 01:41:25.012660 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.012686 1542458 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1218 01:41:25.026609 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.394540 1542458 node_ready.go:35] waiting up to 6m0s for node "no-preload-970975" to be "Ready" ...
	W1218 01:41:25.394678 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395049 1542458 retry.go:31] will retry after 363.399962ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:25.394729 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395067 1542458 retry.go:31] will retry after 247.961433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:25.394925 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395078 1542458 retry.go:31] will retry after 212.437007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
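	Every retry in this stretch fails identically: kubectl validates manifests against the apiserver's /openapi/v2 endpoint, and nothing is listening on localhost:8443 yet, so even apply --force cannot get past validation. The error text names the escape hatch the retry loop does not take (it waits for the apiserver instead); a hypothetical manual probe and workaround, using only paths from the log, would be:

	    # is the local apiserver answering yet?
	    curl -k https://localhost:8443/healthz
	    # apply without client-side validation, sidestepping the OpenAPI fetch
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --validate=false \
	        -f /etc/kubernetes/addons/storage-provisioner.yaml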
	I1218 01:41:25.607792 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.643330 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:25.674866 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.674902 1542458 retry.go:31] will retry after 498.891168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:25.712162 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.712205 1542458 retry.go:31] will retry after 317.248393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.759542 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:25.819152 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.819190 1542458 retry.go:31] will retry after 494.070005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.030108 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:26.090657 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.090742 1542458 retry.go:31] will retry after 817.005428ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.174839 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:26.239145 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.239185 1542458 retry.go:31] will retry after 583.254902ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.314301 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:26.372805 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.372838 1542458 retry.go:31] will retry after 589.170119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.823020 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:26.882718 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.882755 1542458 retry.go:31] will retry after 886.612609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.908327 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:26.962817 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:26.979923 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.980023 1542458 retry.go:31] will retry after 562.729969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:27.024197 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.024231 1542458 retry.go:31] will retry after 1.217970865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:27.396236 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
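
The node_ready.go warnings are a separate poll running alongside the addon applies: minikube asks the apiserver, here via its cluster address 192.168.76.2:8443 rather than localhost, for the node object and reads its Ready condition, and that request dies on the same refused connection. A minimal client-go sketch of that check (an illustration only, assuming the kubeconfig path the log's KUBECONFIG variable points at):

// nodeready.go - fetch a node and report its Ready condition via client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path, taken from the KUBECONFIG value in the commands above.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "no-preload-970975", metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this is the "connection refused"
		// reported by the node_ready.go warnings.
		fmt.Println("get node failed:", err)
		return
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", cond.Status)
		}
	}
}
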
	I1218 01:41:27.543722 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:27.600982 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.601023 1542458 retry.go:31] will retry after 819.101552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.770394 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:27.830382 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.830419 1542458 retry.go:31] will retry after 1.67120434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.242456 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:28.302274 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.302318 1542458 retry.go:31] will retry after 1.635298762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.421000 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:28.487186 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.487222 1542458 retry.go:31] will retry after 1.446238744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:29.502431 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:29.561749 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:29.561785 1542458 retry.go:31] will retry after 2.842084958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:29.896301 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:29.934589 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:29.937978 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:30.014905 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:30.014994 1542458 retry.go:31] will retry after 3.020151942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:30.026594 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:30.026691 1542458 retry.go:31] will retry after 2.597509716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:32.395523 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:32.404827 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:32.465405 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.465451 1542458 retry.go:31] will retry after 2.786267996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.624505 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:32.701764 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.701805 1542458 retry.go:31] will retry after 1.750635941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:33.035842 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:33.099433 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:33.099469 1542458 retry.go:31] will retry after 2.666365739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:34.396276 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:34.452614 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:34.514417 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:34.514448 1542458 retry.go:31] will retry after 5.613247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.252571 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:35.317373 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.317406 1542458 retry.go:31] will retry after 2.675384889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.766334 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:35.831157 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.831192 1542458 retry.go:31] will retry after 7.35423349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
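
The "will retry after ..." intervals recorded by retry.go are not constant; for each manifest they trend upward with jitter (for storage-provisioner.yaml: 589ms, 1.22s, 1.64s, 2.60s, 1.75s, 5.61s; storageclass.yaml reaches 7.35s here), so the apply loop backs off while the apiserver stays down. A minimal sketch of that retry-with-jittered-backoff pattern (a standalone illustration, not minikube's actual retry.go):

// retrybackoff.go - retry a failing operation with jittered, growing delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, roughly doubling
// interval between failures, mirroring the interval growth seen in the log.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jittered exponential backoff: base * 2^i, scaled by a random factor,
		// so successive delays grow but are not strictly monotonic.
		d := time.Duration(float64(base) * float64(uint(1)<<uint(i)) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(5, 500*time.Millisecond, func() error {
		// Stand-in for the kubectl apply that keeps failing above.
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	})
	fmt.Println("final:", err)
}
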
	W1218 01:41:36.896400 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:37.993761 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:38.061649 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:38.061688 1542458 retry.go:31] will retry after 8.134260422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:39.396290 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:40.128917 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:40.209091 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:40.209125 1542458 retry.go:31] will retry after 4.385779308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
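The stderr hint ("turn validation off with --validate=false") points at exactly which step fails: validation needs the /openapi/v2 document from the apiserver, and the connection is refused before any object is ever sent. A standalone probe of that endpoint reproduces the same error while the apiserver is down; skipping client TLS configuration here is a deliberate simplification for illustration, not how kubectl actually dials the server.

package main

import (
	"fmt"
	"net/http"
)

// Reproduce the schema download that kubectl's client-side validation
// performs first. With nothing listening on 8443 this fails with the
// same "connect: connection refused" seen throughout the log.
func main() {
	resp, err := http.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		fmt.Println("validation would fail:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi endpoint reachable:", resp.Status)
}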
	W1218 01:41:41.895504 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:43.185642 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:43.250764 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:43.250796 1542458 retry.go:31] will retry after 6.231358659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:44.395420 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:44.595764 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:44.664344 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:44.664380 1542458 retry.go:31] will retry after 11.847560445s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:46.196558 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:46.269491 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:46.269526 1542458 retry.go:31] will retry after 5.581587619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:46.396021 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:41:48.895451 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
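Interleaved with the addon retries, node_ready.go is polling the node object directly at 192.168.76.2:8443 and logging each refused connection. Below is a minimal client-go sketch of such a Ready-condition check, assuming the kubeconfig path and node name taken from the log; the poll interval is an assumption, and this is not minikube's actual node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(cs, "no-preload-970975")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
}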
	I1218 01:41:49.482739 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:49.541749 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:49.541784 1542458 retry.go:31] will retry after 8.073539424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:51.396344 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:51.852115 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:51.915137 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:51.915172 1542458 retry.go:31] will retry after 10.294162413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:53.896157 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:41:56.395497 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:56.512767 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:56.572427 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:56.572461 1542458 retry.go:31] will retry after 11.314950955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:58.901889 1535974 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000959518s
	I1218 01:41:58.901915 1535974 kubeadm.go:319] 
	I1218 01:41:58.901973 1535974 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:41:58.902006 1535974 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:41:58.902111 1535974 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:41:58.902115 1535974 kubeadm.go:319] 
	I1218 01:41:58.902219 1535974 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:41:58.902251 1535974 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:41:58.902283 1535974 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:41:58.902287 1535974 kubeadm.go:319] 
	I1218 01:41:58.909121 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:41:58.909533 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:41:58.909635 1535974 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:41:58.909878 1535974 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 01:41:58.909884 1535974 kubeadm.go:319] 
	I1218 01:41:58.909948 1535974 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1218 01:41:58.910051 1535974 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000959518s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
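The init failure itself is mechanical: the [kubelet-check] phase polls http://127.0.0.1:10248/healthz for up to 4m0s and gives up when the kubelet never answers. A minimal sketch of that probe loop follows; the one-second poll interval and the per-request client timeout are assumptions, not kubeadm's exact values.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint the way the
// [kubelet-check] phase above describes, giving up after the deadline.
func waitKubeletHealthy(timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err) // matches "The kubelet is not healthy after 4m0.000959518s"
	}
}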
	
	I1218 01:41:58.910129 1535974 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 01:41:59.328841 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:41:59.342624 1535974 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:41:59.342738 1535974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:41:59.351529 1535974 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:41:59.351551 1535974 kubeadm.go:158] found existing configuration files:
	
	I1218 01:41:59.351607 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:41:59.359598 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:41:59.359688 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:41:59.367501 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:41:59.375582 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:41:59.375649 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:41:59.383413 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:41:59.391374 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:41:59.391444 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:41:59.399981 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:41:59.407991 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:41:59.408054 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
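The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane URL and removed when it does not contain it (here grep exits 2 because kubeadm reset already deleted the files). A simplified sketch of the same check is below, with error handling reduced for illustration; it is not the actual kubeadm.go code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cleanStaleKubeconfigs mirrors the grep/rm sequence above: any kubeconfig
// that does not mention the expected control-plane endpoint is treated as
// stale and removed so kubeadm init can regenerate it.
func cleanStaleKubeconfigs(endpoint string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// A missing file (grep exit 2) is handled the same way as a
			// mismatched one: remove it and let kubeadm write a fresh copy.
			os.Remove(path)
			fmt.Println("removed stale", path)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}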
	I1218 01:41:59.415368 1535974 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:41:59.457909 1535974 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:41:59.458215 1535974 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:41:59.537330 1535974 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:41:59.537416 1535974 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:41:59.537453 1535974 kubeadm.go:319] OS: Linux
	I1218 01:41:59.537500 1535974 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:41:59.537551 1535974 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:41:59.537599 1535974 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:41:59.537649 1535974 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:41:59.537698 1535974 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:41:59.537753 1535974 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:41:59.537800 1535974 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:41:59.537850 1535974 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:41:59.537895 1535974 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:41:59.601143 1535974 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:41:59.601259 1535974 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:41:59.601369 1535974 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:41:59.609176 1535974 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:41:59.612708 1535974 out.go:252]   - Generating certificates and keys ...
	I1218 01:41:59.612866 1535974 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:41:59.612946 1535974 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:41:59.613032 1535974 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 01:41:59.613110 1535974 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 01:41:59.613200 1535974 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 01:41:59.613293 1535974 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 01:41:59.613424 1535974 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 01:41:59.613519 1535974 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 01:41:59.613611 1535974 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 01:41:59.613738 1535974 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 01:41:59.613808 1535974 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 01:41:59.613893 1535974 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:41:59.965901 1535974 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:42:00.273593 1535974 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:42:00.517614 1535974 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:42:00.754315 1535974 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:42:00.831013 1535974 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:42:00.831849 1535974 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:42:00.834692 1535974 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:42:00.838062 1535974 out.go:252]   - Booting up control plane ...
	I1218 01:42:00.838173 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:42:00.838258 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:42:00.838866 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:42:00.861421 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:42:00.861532 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:42:00.869206 1535974 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:42:00.869621 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:42:00.869690 1535974 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:42:01.017070 1535974 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:42:01.017185 1535974 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:41:57.615630 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:57.686813 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:57.686850 1542458 retry.go:31] will retry after 29.037122126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:58.395549 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:00.396394 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:02.209588 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:02.278784 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:02.278825 1542458 retry.go:31] will retry after 17.888279069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1218 01:42:02.895652 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:04.896306 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:07.396143 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:07.887683 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:07.967763 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:07.967796 1542458 retry.go:31] will retry after 14.642872465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1218 01:42:09.896073 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:12.396260 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:14.896042 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:16.896286 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:18.896459 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:20.168054 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:20.246791 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:20.246828 1542458 retry.go:31] will retry after 16.712663498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1218 01:42:21.395990 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:22.611852 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:22.673406 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:22.673445 1542458 retry.go:31] will retry after 21.192666201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1218 01:42:23.396132 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:25.895988 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:26.724599 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:42:26.782878 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:26.782912 1542458 retry.go:31] will retry after 21.608216211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1218 01:42:28.395363 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:30.396311 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:32.896262 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:35.395421 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:36.959868 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:37.028262 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:37.028401 1542458 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	]
	W1218 01:42:37.396113 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:39.396234 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:41.396309 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:43.866395 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:43.896089 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:43.945124 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:43.945220 1542458 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	]
	W1218 01:42:45.896258 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:48.392255 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:42:48.396036 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:48.465313 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:48.465411 1542458 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	]
	I1218 01:42:48.469113 1542458 out.go:179] * Enabled addons: 
	I1218 01:42:48.471856 1542458 addons.go:530] duration metric: took 1m23.930193958s for enable addons: enabled=[]
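Every addon apply in the log above follows the same shape: shell out to kubectl, and on failure schedule a jittered retry (the "will retry after 14.6s/16.7s/21.1s" lines from retry.go). The sketch below only illustrates that shape; it is not minikube's actual retry code, and applyWithRetry, its attempt budget, and its backoff constants are invented for the example (it also assumes kubectl is on PATH).

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry is a hypothetical helper illustrating the pattern in the
	// retry.go lines above: run `kubectl apply --force -f <manifest>` and, on
	// failure, wait a randomized interval before trying again.
	func applyWithRetry(manifest string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %w: %s", manifest, err, out)
			// Jittered backoff, loosely matching the 14-21s
			// "will retry after" intervals seen in the log.
			time.Sleep(time.Duration(14+rand.Intn(8)) * time.Second)
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 4); err != nil {
			fmt.Println("giving up:", err)
		}
	}

Note that in this run the retries could never succeed: every attempt failed at manifest validation because the apiserver on localhost:8443 was down, so the loop simply burned its budget and minikube finished with enabled=[].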
	W1218 01:42:50.396402 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	[78 near-identical node_ready retry lines from 01:42:52 to 01:45:54 elided; each failed with the same connection-refused error]
	W1218 01:45:56.895990 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:46:01.012416 1535974 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000248647s
	I1218 01:46:01.012441 1535974 kubeadm.go:319] 
	I1218 01:46:01.012495 1535974 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:46:01.012527 1535974 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:46:01.012642 1535974 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:46:01.012648 1535974 kubeadm.go:319] 
	I1218 01:46:01.012746 1535974 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:46:01.012776 1535974 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:46:01.012805 1535974 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:46:01.012808 1535974 kubeadm.go:319] 
	I1218 01:46:01.017099 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:46:01.017529 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:46:01.017640 1535974 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:46:01.017873 1535974 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:46:01.017879 1535974 kubeadm.go:319] 
	I1218 01:46:01.017947 1535974 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
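The whole kubeadm failure above hinges on a single probe: kubeadm polls the kubelet's healthz endpoint on 127.0.0.1:10248 and gives up after the 4m0s budget. Below is a minimal Go sketch of that probe, mirroring the quoted `curl -sSL http://127.0.0.1:10248/healthz` rather than kubeadm's actual implementation; waitForKubelet and the 2s poll interval are invented for illustration.

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	// waitForKubelet polls http://127.0.0.1:10248/healthz until it returns
	// 200 OK or the context deadline expires -- the same check kubeadm
	// describes above via curl.
	func waitForKubelet(ctx context.Context) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		// kubeadm reported a 4m0.000248647s wait above; use the same budget.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		if err := waitForKubelet(ctx); err != nil {
			fmt.Println(err)
		}
	}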
	I1218 01:46:01.017993 1535974 kubeadm.go:403] duration metric: took 8m7.229192197s to StartCluster
	I1218 01:46:01.018027 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:46:01.018087 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:46:01.042559 1535974 cri.go:89] found id: ""
	I1218 01:46:01.042584 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.042593 1535974 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:46:01.042599 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:46:01.042663 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:46:01.070638 1535974 cri.go:89] found id: ""
	I1218 01:46:01.070661 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.070670 1535974 logs.go:284] No container was found matching "etcd"
	I1218 01:46:01.070675 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:46:01.070733 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:46:01.095625 1535974 cri.go:89] found id: ""
	I1218 01:46:01.095652 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.095661 1535974 logs.go:284] No container was found matching "coredns"
	I1218 01:46:01.095667 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:46:01.095726 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:46:01.123024 1535974 cri.go:89] found id: ""
	I1218 01:46:01.123049 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.123058 1535974 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:46:01.123066 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:46:01.123127 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:46:01.149205 1535974 cri.go:89] found id: ""
	I1218 01:46:01.149273 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.149283 1535974 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:46:01.149291 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:46:01.149370 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:46:01.175919 1535974 cri.go:89] found id: ""
	I1218 01:46:01.175947 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.175957 1535974 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:46:01.175985 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:46:01.176067 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:46:01.203077 1535974 cri.go:89] found id: ""
	I1218 01:46:01.203101 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.203110 1535974 logs.go:284] No container was found matching "kindnet"
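After kubeadm gives up, the log shows minikube sweeping the CRI for each control-plane container by name via `crictl ps -a --quiet --name=<component>`; every sweep returns an empty ID list, confirming no control-plane container was ever created. A rough Go sketch of that sweep follows (sweepContainers is hypothetical; the real logic lives in minikube's cri.go, and this assumes crictl is reachable via sudo on the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sweepContainers runs `crictl ps -a --quiet --name=<component>` for each
	// control-plane component, as in the log above, and reports which ones
	// have at least one container (running or exited).
	func sweepContainers(components []string) map[string]bool {
		found := make(map[string]bool)
		for _, c := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
			ids := strings.Fields(string(out))
			found[c] = err == nil && len(ids) > 0
		}
		return found
	}

	func main() {
		comps := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for c, ok := range sweepContainers(comps) {
			if !ok {
				fmt.Printf("no container found matching %q\n", c)
			}
		}
	}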
	I1218 01:46:01.203121 1535974 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:46:01.203133 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:46:01.267505 1535974 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:46:01.258672    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.259298    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261036    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261647    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.263429    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:46:01.258672    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.259298    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261036    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261647    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.263429    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
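The repeated `connection refused` on [::1]:8443 above means nothing is listening on the apiserver port at all, which is consistent with `crictl` finding no kube-apiserver container. A quick reachability check (assumptions: run inside the node; `curl` and `ss` are available in the kicbase image):

    # Hedged sketch: confirm whether anything listens on the apiserver port.
    sudo ss -ltn 'sport = :8443'          # empty output => no listener
    curl -sk --max-time 5 https://localhost:8443/healthz \
      || echo "apiserver unreachable (matches the errors above)"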
	I1218 01:46:01.267525 1535974 logs.go:123] Gathering logs for containerd ...
	I1218 01:46:01.267538 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:46:01.305435 1535974 logs.go:123] Gathering logs for container status ...
	I1218 01:46:01.305473 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:46:01.335002 1535974 logs.go:123] Gathering logs for kubelet ...
	I1218 01:46:01.335029 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:46:01.392317 1535974 logs.go:123] Gathering logs for dmesg ...
	I1218 01:46:01.392351 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1218 01:46:01.412420 1535974 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
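The second SystemVerification warning above names the relevant knob: on a cgroup v1 host, kubelet v1.35 refuses to run unless `FailCgroupV1` is set to `false` in the kubelet configuration. A minimal sketch of that fragment, written via a heredoc (`failCgroupV1` is the serialized KubeletConfiguration field name; the target path is illustrative only, since minikube regenerates the kubelet config from kubeadm patches on every start):

    # Hedged sketch only: minikube rewrites /var/lib/kubelet/config.yaml on
    # each start, so a durable fix would go through a kubeadm/kubelet patch.
    sudo tee /tmp/kubelet-cgroupv1-patch.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false   # explicitly allow cgroup v1, per the warning above
    EOF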
	W1218 01:46:01.412472 1535974 out.go:285] * 
	W1218 01:46:01.412527 1535974 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:46:01.412543 1535974 out.go:285] * 
	W1218 01:46:01.414976 1535974 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:46:01.421623 1535974 out.go:203] 
	W1218 01:46:01.425533 1535974 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:46:01.425601 1535974 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:46:01.425624 1535974 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:46:01.428730 1535974 out.go:203] 
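The suggestion above is minikube's generic advice for kubelet startup failures. A retry with the quoted flag would look like the sketch below; note, though, that the kubelet log further down blames cgroup v1 validation rather than the cgroup driver, so the flag is shown only because the log suggests it:

    # Hedged sketch: re-run start with the flag quoted in the suggestion above.
    minikube start -p newest-cni-120615 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd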
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393026324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393099233Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393195370Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393270003Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393341599Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393405606Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393477645Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393542177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393629223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393734886Z" level=info msg="Connect containerd service"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.394100211Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.394756958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.408838556Z" level=info msg="Start subscribing containerd event"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.408942858Z" level=info msg="Start recovering state"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.408840895Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.409318529Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448011988Z" level=info msg="Start event monitor"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448063680Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448073148Z" level=info msg="Start streaming server"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448083273Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448092323Z" level=info msg="runtime interface starting up..."
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448098198Z" level=info msg="starting plugins..."
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448110538Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448240225Z" level=info msg="containerd successfully booted in 0.081802s"
	Dec 18 01:37:52 newest-cni-120615 systemd[1]: Started containerd.service - containerd container runtime.
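One line in the containerd log above is worth isolating: CNI failed to load because `/etc/cni/net.d` holds no network config. That is expected this early for a `--network-plugin=cni` profile, but it is cheap to verify from the node (assumption: node shell access):

    # Hedged sketch: confirm whether any CNI network config exists yet.
    ls -l /etc/cni/net.d 2>/dev/null || echo "no CNI config dir"   # empty => the containerd error above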
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:46:02.566047    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:02.566953    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:02.568580    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:02.569145    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:02.571160    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:46:02 up  8:28,  0 user,  load average: 0.41, 0.87, 1.57
	Linux newest-cni-120615 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:45:59 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:45:59 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 18 01:45:59 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:45:59 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:45:59 newest-cni-120615 kubelet[4731]: E1218 01:45:59.939808    4731 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:45:59 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:45:59 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:46:00 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 18 01:46:00 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:46:00 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:46:00 newest-cni-120615 kubelet[4737]: E1218 01:46:00.693194    4737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:46:00 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:46:00 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:46:01 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 18 01:46:01 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:46:01 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:46:01 newest-cni-120615 kubelet[4817]: E1218 01:46:01.481309    4817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:46:01 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:46:01 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:46:02 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 18 01:46:02 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:46:02 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:46:02 newest-cni-120615 kubelet[4844]: E1218 01:46:02.198550    4844 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:46:02 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:46:02 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
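The kubelet restart loop above fails validation for one stated reason: the host is on cgroup v1. A standard host-side probe to confirm which cgroup version is mounted (not something the test itself runs):

    # Hedged sketch: detect the mounted cgroup version on the host.
    stat -fc %T /sys/fs/cgroup   # "cgroup2fs" => v2; "tmpfs" => legacy v1 hierarchy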
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615: exit status 6 (384.240706ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:46:03.141427 1548445 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-120615" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-120615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (501.80s)
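The status check above also fails on the kubeconfig side: the profile was never written into the kubeconfig, and minikube's own warning names the fix. A sketch of that recovery path (it only helps once the cluster actually starts; with the apiserver down, status would still report Stopped):

    # Hedged sketch: repair the kubectl context as the warning suggests.
    minikube -p newest-cni-120615 update-context
    kubectl config get-contexts   # verify the profile's context now exists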

x
+
TestStartStop/group/no-preload/serial/DeployApp (2.93s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-970975 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-970975 create -f testdata/busybox.yaml: exit status 1 (57.72915ms)

** stderr ** 
	error: context "no-preload-970975" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-970975 create -f testdata/busybox.yaml failed: exit status 1
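This failure is downstream of the same kubeconfig problem: the `no-preload-970975` context was never written, so every `kubectl --context` call dies immediately. A defensive sketch that checks for the context before deploying (a hypothetical guard, not part of the test):

    # Hedged sketch: only deploy if the context actually exists.
    if kubectl config get-contexts -o name | grep -qx no-preload-970975; then
      kubectl --context no-preload-970975 create -f testdata/busybox.yaml
    else
      echo "context no-preload-970975 missing from kubeconfig"   # the error above
    fi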
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-970975
helpers_test.go:244: (dbg) docker inspect no-preload-970975:

-- stdout --
	[
	    {
	        "Id": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	        "Created": "2025-12-18T01:31:17.073767234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1511022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:31:17.16290886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hosts",
	        "LogPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d-json.log",
	        "Name": "/no-preload-970975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	                "LowerDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970975",
	                "Source": "/var/lib/docker/volumes/no-preload-970975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970975",
	                "name.minikube.sigs.k8s.io": "no-preload-970975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1e9bc76dbd04c46d3398cadb3276424663a2b675616e94f670f35547ef4442d",
	            "SandboxKey": "/var/run/docker/netns/e1e9bc76dbd0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34177"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34179"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:4c:f1:db:47:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ab8f39244bcf5d4900f2aa17c2792aefcc54582c4c250699be8d71c4c2a27a9",
	                    "EndpointID": "a42f74c81af72816a5096acec3153b345a82e549e666df17a9cd4661c0bfa55d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970975",
	                        "b1403d931305"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
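The inspect output above shows the container itself is healthy and its 8443/tcp apiserver port is published on 127.0.0.1:34180; the failure is inside the guest, not at the Docker layer. Extracting that mapping directly with an inspect format string (standard Docker Go templating, shown as a sketch):

    # Hedged sketch: pull the published host port for the apiserver (8443/tcp).
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-970975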
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975: exit status 6 (302.845036ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:39:47.427521 1539904 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970975 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-207212 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ delete  │ -p old-k8s-version-207212                                                                                                                                                                                                                                │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ delete  │ -p old-k8s-version-207212                                                                                                                                                                                                                                │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-922343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:33 UTC │
	│ stop    │ -p embed-certs-922343 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-922343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:35 UTC │
	│ image   │ embed-certs-922343 image list --format=json                                                                                                                                                                                                              │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ pause   │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ unpause │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:37:41
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:37:41.409265 1535974 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:37:41.409621 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409656 1535974 out.go:374] Setting ErrFile to fd 2...
	I1218 01:37:41.409674 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409955 1535974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:37:41.410413 1535974 out.go:368] Setting JSON to false
	I1218 01:37:41.411299 1535974 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30008,"bootTime":1765991854,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:37:41.411395 1535974 start.go:143] virtualization:  
	I1218 01:37:41.415580 1535974 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:37:41.419867 1535974 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:37:41.419945 1535974 notify.go:221] Checking for updates...
	I1218 01:37:41.426287 1535974 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:37:41.429432 1535974 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:37:41.433605 1535974 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:37:41.436760 1535974 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:37:41.439743 1535974 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:37:41.443485 1535974 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:41.443626 1535974 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:37:41.476508 1535974 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:37:41.476682 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.529692 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.519945478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.529801 1535974 docker.go:319] overlay module found
	I1218 01:37:41.533160 1535974 out.go:179] * Using the docker driver based on user configuration
	I1218 01:37:41.536049 1535974 start.go:309] selected driver: docker
	I1218 01:37:41.536071 1535974 start.go:927] validating driver "docker" against <nil>
	I1218 01:37:41.536087 1535974 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:37:41.536903 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.594960 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.586076136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.595118 1535974 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1218 01:37:41.595153 1535974 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1218 01:37:41.595385 1535974 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:37:41.598327 1535974 out.go:179] * Using Docker driver with root privileges
	I1218 01:37:41.601257 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:41.601333 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:41.601345 1535974 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 01:37:41.601426 1535974 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:41.606414 1535974 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:37:41.609305 1535974 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:37:41.612198 1535974 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:37:41.615045 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:41.615091 1535974 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:37:41.615104 1535974 cache.go:65] Caching tarball of preloaded images
	I1218 01:37:41.615136 1535974 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:37:41.615184 1535974 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:37:41.615194 1535974 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:37:41.615294 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:41.615311 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json: {Name:mk1c21bf1c938626eee4c23c85b81bbb6255d680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:41.634234 1535974 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:37:41.634258 1535974 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:37:41.634273 1535974 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:37:41.634304 1535974 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:37:41.634418 1535974 start.go:364] duration metric: took 93.52µs to acquireMachinesLock for "newest-cni-120615"
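
The machines lock above is taken with an explicit retry spec ({Delay:500ms Timeout:10m0s}). A minimal Go sketch of that acquire-with-retry pattern, using a hypothetical lock-file helper rather than minikube's real lock package:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryLock is a hypothetical stand-in for an advisory lock:
	// O_CREATE|O_EXCL fails if another process already holds the lock file.
	func tryLock(path string) (func(), error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
		if err != nil {
			return nil, err
		}
		f.Close()
		return func() { os.Remove(path) }, nil
	}

	// acquire retries tryLock every delay until timeout, mirroring the
	// {Delay:500ms Timeout:10m0s} spec logged above.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			if release, err := tryLock(path); err == nil {
				return release, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held")
	}
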
	I1218 01:37:41.634450 1535974 start.go:93] Provisioning new machine with config: &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:37:41.634560 1535974 start.go:125] createHost starting for "" (driver="docker")
	I1218 01:37:41.638056 1535974 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1218 01:37:41.638295 1535974 start.go:159] libmachine.API.Create for "newest-cni-120615" (driver="docker")
	I1218 01:37:41.638333 1535974 client.go:173] LocalClient.Create starting
	I1218 01:37:41.638412 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 01:37:41.638450 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638466 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638528 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 01:37:41.638549 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638564 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638936 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 01:37:41.659766 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 01:37:41.659848 1535974 network_create.go:284] running [docker network inspect newest-cni-120615] to gather additional debugging logs...
	I1218 01:37:41.659883 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615
	W1218 01:37:41.680710 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 returned with exit code 1
	I1218 01:37:41.680751 1535974 network_create.go:287] error running [docker network inspect newest-cni-120615]: docker network inspect newest-cni-120615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-120615 not found
	I1218 01:37:41.680768 1535974 network_create.go:289] output of [docker network inspect newest-cni-120615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-120615 not found
	
	** /stderr **
	I1218 01:37:41.680867 1535974 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:41.697958 1535974 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
	I1218 01:37:41.698338 1535974 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-687fba22ee0f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:88:15:4b:79:13} reservation:<nil>}
	I1218 01:37:41.698559 1535974 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb17cfebd2dd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:ff:26:b3:7d:81} reservation:<nil>}
	I1218 01:37:41.698831 1535974 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ab8f39244bc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:a9:2b:d7:5d:ce} reservation:<nil>}
	I1218 01:37:41.699243 1535974 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001983860}
	I1218 01:37:41.699261 1535974 network_create.go:124] attempt to create docker network newest-cni-120615 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1218 01:37:41.699323 1535974 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-120615 newest-cni-120615
	I1218 01:37:41.764110 1535974 network_create.go:108] docker network newest-cni-120615 192.168.85.0/24 created
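
The subnet probe above walks candidate private /24s in steps of 9 (49, 58, 67, 76) until one is free, then settles on 192.168.85.0/24. A toy Go sketch of the same scan, with the taken set hard-coded from this run; not minikube's actual network package:

	package main

	import "fmt"

	// Candidate /24s are probed in the order the log shows:
	// 192.168.49.0/24, then stepping the third octet by 9.
	// taken simulates the subnets already claimed by existing bridges.
	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		for third := 49; third < 255; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet)
			break
		}
	}
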
	I1218 01:37:41.764138 1535974 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-120615" container
	I1218 01:37:41.764211 1535974 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 01:37:41.780305 1535974 cli_runner.go:164] Run: docker volume create newest-cni-120615 --label name.minikube.sigs.k8s.io=newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true
	I1218 01:37:41.798478 1535974 oci.go:103] Successfully created a docker volume newest-cni-120615
	I1218 01:37:41.798584 1535974 cli_runner.go:164] Run: docker run --rm --name newest-cni-120615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --entrypoint /usr/bin/test -v newest-cni-120615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 01:37:42.380541 1535974 oci.go:107] Successfully prepared a docker volume newest-cni-120615
	I1218 01:37:42.380617 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:42.380663 1535974 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 01:37:42.380737 1535974 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 01:37:46.199794 1535974 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.819017615s)
	I1218 01:37:46.199835 1535974 kic.go:203] duration metric: took 3.819169809s to extract preloaded images to volume ...
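
For reference, the extraction step amounts to running tar inside the kicbase image so the lz4 preload lands in the named volume that later becomes the node's /var. A Go sketch composing that same docker run invocation with os/exec, paths copied from this run:

	package main

	import "os/exec"

	// Extract the preloaded image tarball into the node's named volume
	// by running tar inside the kicbase image, as the step above does.
	func main() {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro",
			"-v", "newest-cni-120615:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
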
	W1218 01:37:46.199963 1535974 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 01:37:46.200068 1535974 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 01:37:46.253384 1535974 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-120615 --name newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-120615 --network newest-cni-120615 --ip 192.168.85.2 --volume newest-cni-120615:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 01:37:46.551881 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Running}}
	I1218 01:37:46.583903 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.608169 1535974 cli_runner.go:164] Run: docker exec newest-cni-120615 stat /var/lib/dpkg/alternatives/iptables
	I1218 01:37:46.667666 1535974 oci.go:144] the created container "newest-cni-120615" has a running status.
	I1218 01:37:46.667692 1535974 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa...
	I1218 01:37:46.834539 1535974 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 01:37:46.861844 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.884882 1535974 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 01:37:46.884908 1535974 kic_runner.go:114] Args: [docker exec --privileged newest-cni-120615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 01:37:46.942854 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.960511 1535974 machine.go:94] provisionDockerMachine start ...
	I1218 01:37:46.960612 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:46.978530 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:46.978859 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:46.978868 1535974 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:37:46.979490 1535974 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1218 01:37:50.148337 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
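
The first SSH dial above fails with "handshake failed: EOF" because sshd inside the freshly started container is not accepting connections yet; the provisioner simply retries until the hostname command succeeds. A sketch of that retry loop with golang.org/x/crypto/ssh, reusing the host port (34207), user, and key path from this run; InsecureIgnoreHostKey is acceptable only for a throwaway test VM like this one:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps redialing until sshd in the container is up,
	// tolerating the transient "ssh: handshake failed: EOF" seen above.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
		deadline := time.Now().Add(timeout)
		for {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			if time.Now().After(deadline) {
				return nil, err
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
		}
		client, err := dialWithRetry("127.0.0.1:34207", cfg, 30*time.Second)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, _ := session.Output("hostname")
		fmt.Printf("%s", out)
	}
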
	
	I1218 01:37:50.148363 1535974 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:37:50.148435 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.165796 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.166115 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.166132 1535974 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:37:50.330955 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:37:50.331106 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.348111 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.348435 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.348452 1535974 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:37:50.500688 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:37:50.500716 1535974 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:37:50.500744 1535974 ubuntu.go:190] setting up certificates
	I1218 01:37:50.500754 1535974 provision.go:84] configureAuth start
	I1218 01:37:50.500821 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:50.517589 1535974 provision.go:143] copyHostCerts
	I1218 01:37:50.517666 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:37:50.517680 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:37:50.517755 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:37:50.517871 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:37:50.517882 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:37:50.517912 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:37:50.517969 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:37:50.517977 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:37:50.518002 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:37:50.518054 1535974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
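
The server cert above is issued with the SAN list [127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615] and the 26280h expiry from the cluster config. A stdlib crypto/x509 sketch of producing such a SAN certificate; self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs and org copied from the provision step above.
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-120615"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-120615"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed for the sketch; pass the CA cert and key as the
		// third and fifth arguments to sign with a real CA instead.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
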
	I1218 01:37:50.674888 1535974 provision.go:177] copyRemoteCerts
	I1218 01:37:50.674959 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:37:50.675009 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.693570 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.800638 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:37:50.818412 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:37:50.836171 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:37:50.853859 1535974 provision.go:87] duration metric: took 353.089827ms to configureAuth
	I1218 01:37:50.853884 1535974 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:37:50.854091 1535974 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:50.854099 1535974 machine.go:97] duration metric: took 3.893564907s to provisionDockerMachine
	I1218 01:37:50.854106 1535974 client.go:176] duration metric: took 9.215762234s to LocalClient.Create
	I1218 01:37:50.854131 1535974 start.go:167] duration metric: took 9.215836644s to libmachine.API.Create "newest-cni-120615"
	I1218 01:37:50.854140 1535974 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:37:50.854151 1535974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:37:50.854199 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:37:50.854246 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.871379 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.976751 1535974 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:37:50.979800 1535974 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:37:50.979835 1535974 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:37:50.979846 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:37:50.979919 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:37:50.980017 1535974 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:37:50.980118 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:37:50.987435 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:51.010927 1535974 start.go:296] duration metric: took 156.770961ms for postStartSetup
	I1218 01:37:51.011358 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.028989 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:51.029275 1535974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:37:51.029337 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.046033 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.149901 1535974 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:37:51.154841 1535974 start.go:128] duration metric: took 9.520265624s to createHost
	I1218 01:37:51.154870 1535974 start.go:83] releasing machines lock for "newest-cni-120615", held for 9.520437574s
	I1218 01:37:51.154941 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.172452 1535974 ssh_runner.go:195] Run: cat /version.json
	I1218 01:37:51.172506 1535974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:37:51.172521 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.172564 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.192456 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.195325 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.384735 1535974 ssh_runner.go:195] Run: systemctl --version
	I1218 01:37:51.391571 1535974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:37:51.396317 1535974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:37:51.396387 1535974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:37:51.426976 1535974 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1218 01:37:51.427002 1535974 start.go:496] detecting cgroup driver to use...
	I1218 01:37:51.427045 1535974 detect.go:187] detected "cgroupfs" cgroup driver on host os
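
One way to arrive at the "cgroupfs" verdict above is to ask the Docker daemon directly, since its info dump earlier in this log already reports CgroupDriver:cgroupfs. A sketch of that query; not necessarily the exact logic in detect.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Query the daemon's cgroup driver with a Go-template format string,
	// the same mechanism the cli_runner invocations above rely on.
	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("detected cgroup driver:", strings.TrimSpace(string(out)))
	}
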
	I1218 01:37:51.427094 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:37:51.443517 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:37:51.461122 1535974 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:37:51.461182 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:37:51.478844 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:37:51.497057 1535974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:37:51.618030 1535974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:37:51.746908 1535974 docker.go:234] disabling docker service ...
	I1218 01:37:51.747041 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:37:51.768317 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:37:51.781980 1535974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:37:51.904322 1535974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:37:52.052799 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:37:52.066888 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:37:52.082976 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:37:52.093587 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:37:52.102930 1535974 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:37:52.103042 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:37:52.112246 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.121385 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:37:52.130577 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.139689 1535974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:37:52.149904 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:37:52.159110 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:37:52.168101 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:37:52.177205 1535974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:37:52.185241 1535974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:37:52.193080 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.308369 1535974 ssh_runner.go:195] Run: sudo systemctl restart containerd
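
The sed pipeline above rewrites /etc/containerd/config.toml in place before containerd is restarted. A Go sketch of two of those edits (SystemdCgroup and sandbox_image) as a single-pass regexp rewrite; it assumes the same file layout the sed expressions target:

	package main

	import (
		"os"
		"regexp"
	)

	// Equivalent of two of the sed edits above, done in one pass:
	// force SystemdCgroup = false and pin the sandbox image.
	func main() {
		path := "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		data = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
			ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		data = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
			ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
		// containerd must then be restarted, as the log does with
		// systemctl daemon-reload && systemctl restart containerd.
	}
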
	I1218 01:37:52.450163 1535974 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:37:52.450242 1535974 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:37:52.454206 1535974 start.go:564] Will wait 60s for crictl version
	I1218 01:37:52.454330 1535974 ssh_runner.go:195] Run: which crictl
	I1218 01:37:52.457885 1535974 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:37:52.482102 1535974 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:37:52.482223 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.502684 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.526110 1535974 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:37:52.529020 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:52.546624 1535974 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:37:52.550634 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
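
The one-liner above makes the /etc/hosts update idempotent: drop any stale host.minikube.internal line, then append the current gateway mapping. The same logic as a Go sketch:

	package main

	import (
		"os"
		"strings"
	)

	// Filter out any existing host.minikube.internal entry, then append
	// the gateway mapping, mirroring the grep/echo pipeline above.
	func main() {
		const entry = "192.168.85.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
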
	I1218 01:37:52.563708 1535974 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:37:52.566648 1535974 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:37:52.566803 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:52.566895 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.591897 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.591927 1535974 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:37:52.592017 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.621212 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.621242 1535974 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:37:52.621251 1535974 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:37:52.621346 1535974 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:37:52.621421 1535974 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:37:52.651981 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:52.652006 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:52.652029 1535974 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:37:52.652053 1535974 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:37:52.652168 1535974 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 01:37:52.652238 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:37:52.659908 1535974 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:37:52.660006 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:37:52.667532 1535974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:37:52.680138 1535974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:37:52.693473 1535974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1218 01:37:52.706791 1535974 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:37:52.710393 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:37:52.719930 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.838696 1535974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:37:52.855521 1535974 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:37:52.855591 1535974 certs.go:195] generating shared ca certs ...
	I1218 01:37:52.855623 1535974 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.855818 1535974 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:37:52.855904 1535974 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:37:52.855930 1535974 certs.go:257] generating profile certs ...
	I1218 01:37:52.856023 1535974 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:37:52.856067 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt with IP's: []
	I1218 01:37:52.959822 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt ...
	I1218 01:37:52.959911 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt: {Name:mk1478bd753bc1bd23e013e8b566fd65e1f2e1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960142 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key ...
	I1218 01:37:52.960182 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key: {Name:mk3ecbc7ec855c1ebb5deefb951affdfc3f90c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960334 1535974 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:37:52.960379 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1218 01:37:53.073797 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 ...
	I1218 01:37:53.073831 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056: {Name:mkbff084b54b98d69b985b5f1bd631cb072aabd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074057 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 ...
	I1218 01:37:53.074074 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056: {Name:mkb73e5093692957aa43e022ccaed162c1426b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074169 1535974 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt
	I1218 01:37:53.074248 1535974 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key
	I1218 01:37:53.074307 1535974 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:37:53.074329 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt with IP's: []
	I1218 01:37:53.314103 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt ...
	I1218 01:37:53.314136 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt: {Name:mk54950f9214da12e2d9ae5c67b648894886fbc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314331 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key ...
	I1218 01:37:53.314345 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key: {Name:mk2d7b01164454a2df40dfec571544f9e3d23770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314570 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:37:53.314621 1535974 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:37:53.314635 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:37:53.314664 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:37:53.314694 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:37:53.314721 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:37:53.314772 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:53.315353 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:37:53.334028 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:37:53.352910 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:37:53.371116 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:37:53.388896 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:37:53.407154 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:37:53.424768 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:37:53.442432 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:37:53.459693 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:37:53.477104 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:37:53.494473 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:37:53.511694 1535974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:37:53.524605 1535974 ssh_runner.go:195] Run: openssl version
	I1218 01:37:53.531162 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.539159 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:37:53.547088 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550792 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550872 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.592275 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.599906 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.607314 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.614880 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:37:53.622354 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626261 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626329 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.673215 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:37:53.682819 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1218 01:37:53.692004 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.703568 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:37:53.718183 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726247 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726314 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.769713 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:37:53.777194 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
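	(annotation) The openssl/ln sequence above is the standard OpenSSL subject-hash layout: the library resolves CAs in /etc/ssl/certs through <subject-hash>.0 symlinks, which is why each installed PEM gets a hash-named link (b5213941.0 is the hash computed above for minikubeCA.pem). Condensed for one cert:

    # Same three steps as the log, for minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0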
	I1218 01:37:53.784995 1535974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:37:53.788744 1535974 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 01:37:53.788807 1535974 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:53.788935 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:37:53.788995 1535974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:37:53.815984 1535974 cri.go:89] found id: ""
	I1218 01:37:53.816075 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:37:53.824897 1535974 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:37:53.834778 1535974 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:37:53.834915 1535974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:37:53.843777 1535974 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:37:53.843797 1535974 kubeadm.go:158] found existing configuration files:
	
	I1218 01:37:53.843886 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:37:53.851665 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:37:53.851766 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:37:53.859225 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:37:53.867081 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:37:53.867187 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:37:53.874504 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.882220 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:37:53.882286 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.889970 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:37:53.897334 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:37:53.897401 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
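	(annotation) The grep/rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is removed before kubeadm init runs. The same steps, condensed into one loop with the endpoint and file names taken from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"     # missing or stale -> remove
    done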
	I1218 01:37:53.904593 1535974 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:37:53.944551 1535974 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:37:53.944611 1535974 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:37:54.027408 1535974 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:37:54.027490 1535974 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:37:54.027530 1535974 kubeadm.go:319] OS: Linux
	I1218 01:37:54.027581 1535974 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:37:54.027632 1535974 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:37:54.027693 1535974 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:37:54.027752 1535974 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:37:54.027803 1535974 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:37:54.027862 1535974 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:37:54.027912 1535974 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:37:54.027964 1535974 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:37:54.028012 1535974 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:37:54.097877 1535974 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:37:54.097993 1535974 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:37:54.098097 1535974 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:37:54.105071 1535974 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:37:54.111500 1535974 out.go:252]   - Generating certificates and keys ...
	I1218 01:37:54.111603 1535974 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:37:54.111672 1535974 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:37:54.530590 1535974 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 01:37:54.977111 1535974 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 01:37:55.271802 1535974 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 01:37:55.800100 1535974 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 01:37:55.973303 1535974 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 01:37:55.974317 1535974 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.183207 1535974 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 01:37:56.183548 1535974 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.263322 1535974 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 01:37:56.663315 1535974 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 01:37:56.917852 1535974 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 01:37:56.918300 1535974 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:37:57.144859 1535974 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:37:57.575780 1535974 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:37:57.878713 1535974 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:37:58.333388 1535974 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:37:58.732682 1535974 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:37:58.733416 1535974 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:37:58.737417 1535974 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:37:58.741102 1535974 out.go:252]   - Booting up control plane ...
	I1218 01:37:58.741209 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:37:58.741290 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:37:58.741882 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:37:58.757974 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:37:58.758530 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:37:58.766133 1535974 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:37:58.766550 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:37:58.766761 1535974 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:37:58.901026 1535974 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:37:58.901158 1535974 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:39:44.779437 1510702 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001112678s
	I1218 01:39:44.779500 1510702 kubeadm.go:319] 
	I1218 01:39:44.779569 1510702 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:39:44.779604 1510702 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:39:44.779726 1510702 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:39:44.779736 1510702 kubeadm.go:319] 
	I1218 01:39:44.779894 1510702 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:39:44.779933 1510702 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:39:44.779971 1510702 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:39:44.779981 1510702 kubeadm.go:319] 
	I1218 01:39:44.784423 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:39:44.784877 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:39:44.784990 1510702 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:39:44.785228 1510702 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:39:44.785237 1510702 kubeadm.go:319] 
	I1218 01:39:44.785307 1510702 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
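	(annotation) The wait-control-plane failure above is kubeadm timing out on the kubelet's local healthz endpoint. The same probe can be run by hand on the node to confirm the diagnosis; the URL is taken from the log, and a connection refused or timeout here matches "The kubelet is not running":

    # 200 means healthy; anything else matches the failure above
    curl -sS -o /dev/null -w '%{http_code}\n' --max-time 5 http://127.0.0.1:10248/healthz
    systemctl status kubelet     # the two commands kubeadm itself suggests
    journalctl -xeu kubelet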
	I1218 01:39:44.785368 1510702 kubeadm.go:403] duration metric: took 8m6.991155077s to StartCluster
	I1218 01:39:44.785429 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:39:44.785502 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:39:44.810447 1510702 cri.go:89] found id: ""
	I1218 01:39:44.810472 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.810482 1510702 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:39:44.810488 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:39:44.810555 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:39:44.839406 1510702 cri.go:89] found id: ""
	I1218 01:39:44.839434 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.839443 1510702 logs.go:284] No container was found matching "etcd"
	I1218 01:39:44.839450 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:39:44.839511 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:39:44.868069 1510702 cri.go:89] found id: ""
	I1218 01:39:44.868096 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.868105 1510702 logs.go:284] No container was found matching "coredns"
	I1218 01:39:44.868111 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:39:44.868169 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:39:44.895127 1510702 cri.go:89] found id: ""
	I1218 01:39:44.895154 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.895163 1510702 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:39:44.895170 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:39:44.895229 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:39:44.922045 1510702 cri.go:89] found id: ""
	I1218 01:39:44.922067 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.922075 1510702 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:39:44.922081 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:39:44.922141 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:39:44.947348 1510702 cri.go:89] found id: ""
	I1218 01:39:44.947371 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.947380 1510702 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:39:44.947386 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:39:44.947445 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:39:44.974747 1510702 cri.go:89] found id: ""
	I1218 01:39:44.974817 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.974841 1510702 logs.go:284] No container was found matching "kindnet"
	I1218 01:39:44.974872 1510702 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:39:44.974904 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:39:45.158574 1510702 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:39:45.158593 1510702 logs.go:123] Gathering logs for containerd ...
	I1218 01:39:45.158606 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:39:45.231899 1510702 logs.go:123] Gathering logs for container status ...
	I1218 01:39:45.231984 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:39:45.274173 1510702 logs.go:123] Gathering logs for kubelet ...
	I1218 01:39:45.274204 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:39:45.347906 1510702 logs.go:123] Gathering logs for dmesg ...
	I1218 01:39:45.347946 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1218 01:39:45.367741 1510702 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 01:39:45.367789 1510702 out.go:285] * 
	W1218 01:39:45.367853 1510702 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.367874 1510702 out.go:285] * 
	W1218 01:39:45.370057 1510702 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:39:45.374979 1510702 out.go:203] 
	W1218 01:39:45.378669 1510702 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.378761 1510702 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:39:45.378790 1510702 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:39:45.381944 1510702 out.go:203] 
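	(annotation) Two remedies follow from the output above; both are sketches, not fixes verified against this run. The first is minikube's own Suggestion line verbatim (profile name is a placeholder); the second is the KubeletConfiguration field named in the [WARNING SystemVerification] message, available for kubelet v1.31+, which opts back into cgroup v1:

    # 1) minikube's suggested extra-config, from the Suggestion line above
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
    # 2) KubeletConfiguration fragment the warning refers to (assumption: v1beta1 API):
    #      apiVersion: kubelet.config.k8s.io/v1beta1
    #      kind: KubeletConfiguration
    #      failCgroupV1: false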
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:31:28 no-preload-970975 containerd[759]: time="2025-12-18T01:31:28.470947504Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.711596763Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.713869317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.723846633Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.727456559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.796825379Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.799106228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.807433713Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.808922925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.292875381Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.295130606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.303984182Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.305000224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.336005639Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.338266928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.348579276Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.349580951Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.488112742Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.491177326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.502169199Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.503038028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.888978136Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.891655209Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.901388576Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.901784046Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:39:48.063576    5689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:48.064121    5689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:48.065756    5689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:48.066606    5689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:48.068309    5689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:39:48 up  8:22,  0 user,  load average: 1.47, 2.06, 2.16
	Linux no-preload-970975 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:39:44 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:45 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 18 01:39:45 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:45 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:45 no-preload-970975 kubelet[5457]: E1218 01:39:45.535642    5457 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:45 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:45 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:46 no-preload-970975 kubelet[5479]: E1218 01:39:46.210063    5479 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:46 no-preload-970975 kubelet[5582]: E1218 01:39:46.981122    5582 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:47 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 18 01:39:47 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:47 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:47 no-preload-970975 kubelet[5611]: E1218 01:39:47.715101    5611 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:47 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:47 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
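(annotation) The kubelet section of the dump above shows the root cause directly: kubelet v1.35 refuses to start on a cgroup v1 host unless explicitly allowed, and restarts in a tight systemd loop (restart counter 320+). A quick way to check which cgroup version a node runs; per the validation error, this host is on v1:

    # cgroup2fs -> cgroup v2; tmpfs -> cgroup v1
    stat -fc %T /sys/fs/cgroup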
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 6 (376.035911ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:39:48.547431 1540117 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
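(annotation) The exit status 6 above is a kubeconfig problem rather than an apiserver probe: status.go cannot find the profile's endpoint in the integration kubeconfig. The fix the warning itself prints, plus a way to inspect the drift first; both are standard commands, shown here as a sketch:

    kubectl config get-contexts                   # shows which context kubectl points at
    minikube update-context -p no-preload-970975  # re-points it at the profile, per the warning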
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-970975
helpers_test.go:244: (dbg) docker inspect no-preload-970975:

-- stdout --
	[
	    {
	        "Id": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	        "Created": "2025-12-18T01:31:17.073767234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1511022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:31:17.16290886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hosts",
	        "LogPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d-json.log",
	        "Name": "/no-preload-970975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	                "LowerDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970975",
	                "Source": "/var/lib/docker/volumes/no-preload-970975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970975",
	                "name.minikube.sigs.k8s.io": "no-preload-970975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1e9bc76dbd04c46d3398cadb3276424663a2b675616e94f670f35547ef4442d",
	            "SandboxKey": "/var/run/docker/netns/e1e9bc76dbd0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34177"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34179"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:4c:f1:db:47:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ab8f39244bcf5d4900f2aa17c2792aefcc54582c4c250699be8d71c4c2a27a9",
	                    "EndpointID": "a42f74c81af72816a5096acec3153b345a82e549e666df17a9cd4661c0bfa55d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970975",
	                        "b1403d931305"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
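The harness dumps the full inspect document, but any single field can be pulled with the same Go-template syntax minikube itself uses later in this log — illustrative only:

	docker inspect -f '{{.State.Status}}' no-preload-970975
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-970975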
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975: exit status 6 (311.26444ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:39:48.878468 1540205 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
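"May be ok" because minikube status encodes the per-component states into its exit code rather than treating any non-running component as fatal; the component breakdown can be read directly — a sketch using the same --format fields the harness queries one at a time:

	out/minikube-linux-arm64 status -p no-preload-970975 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'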
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970975 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-207212 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ delete  │ -p old-k8s-version-207212                                                                                                                                                                                                                                │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ delete  │ -p old-k8s-version-207212                                                                                                                                                                                                                                │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-922343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:33 UTC │
	│ stop    │ -p embed-certs-922343 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-922343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:35 UTC │
	│ image   │ embed-certs-922343 image list --format=json                                                                                                                                                                                                              │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ pause   │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ unpause │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:37:41
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:37:41.409265 1535974 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:37:41.409621 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409656 1535974 out.go:374] Setting ErrFile to fd 2...
	I1218 01:37:41.409674 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409955 1535974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:37:41.410413 1535974 out.go:368] Setting JSON to false
	I1218 01:37:41.411299 1535974 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30008,"bootTime":1765991854,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:37:41.411395 1535974 start.go:143] virtualization:  
	I1218 01:37:41.415580 1535974 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:37:41.419867 1535974 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:37:41.419945 1535974 notify.go:221] Checking for updates...
	I1218 01:37:41.426287 1535974 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:37:41.429432 1535974 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:37:41.433605 1535974 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:37:41.436760 1535974 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:37:41.439743 1535974 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:37:41.443485 1535974 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:41.443626 1535974 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:37:41.476508 1535974 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:37:41.476682 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.529692 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.519945478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.529801 1535974 docker.go:319] overlay module found
	I1218 01:37:41.533160 1535974 out.go:179] * Using the docker driver based on user configuration
	I1218 01:37:41.536049 1535974 start.go:309] selected driver: docker
	I1218 01:37:41.536071 1535974 start.go:927] validating driver "docker" against <nil>
	I1218 01:37:41.536087 1535974 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:37:41.536903 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.594960 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.586076136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.595118 1535974 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1218 01:37:41.595153 1535974 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1218 01:37:41.595385 1535974 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:37:41.598327 1535974 out.go:179] * Using Docker driver with root privileges
	I1218 01:37:41.601257 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:41.601333 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:41.601345 1535974 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 01:37:41.601426 1535974 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:41.606414 1535974 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:37:41.609305 1535974 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:37:41.612198 1535974 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:37:41.615045 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:41.615091 1535974 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:37:41.615104 1535974 cache.go:65] Caching tarball of preloaded images
	I1218 01:37:41.615136 1535974 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:37:41.615184 1535974 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:37:41.615194 1535974 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:37:41.615294 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:41.615311 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json: {Name:mk1c21bf1c938626eee4c23c85b81bbb6255d680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:41.634234 1535974 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:37:41.634258 1535974 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:37:41.634273 1535974 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:37:41.634304 1535974 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:37:41.634418 1535974 start.go:364] duration metric: took 93.52µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:37:41.634450 1535974 start.go:93] Provisioning new machine with config: &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:37:41.634560 1535974 start.go:125] createHost starting for "" (driver="docker")
	I1218 01:37:41.638056 1535974 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1218 01:37:41.638295 1535974 start.go:159] libmachine.API.Create for "newest-cni-120615" (driver="docker")
	I1218 01:37:41.638333 1535974 client.go:173] LocalClient.Create starting
	I1218 01:37:41.638412 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 01:37:41.638450 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638466 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638528 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 01:37:41.638549 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638564 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638936 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 01:37:41.659766 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 01:37:41.659848 1535974 network_create.go:284] running [docker network inspect newest-cni-120615] to gather additional debugging logs...
	I1218 01:37:41.659883 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615
	W1218 01:37:41.680710 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 returned with exit code 1
	I1218 01:37:41.680751 1535974 network_create.go:287] error running [docker network inspect newest-cni-120615]: docker network inspect newest-cni-120615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-120615 not found
	I1218 01:37:41.680768 1535974 network_create.go:289] output of [docker network inspect newest-cni-120615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-120615 not found
	
	** /stderr **
	I1218 01:37:41.680867 1535974 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:41.697958 1535974 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
	I1218 01:37:41.698338 1535974 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-687fba22ee0f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:88:15:4b:79:13} reservation:<nil>}
	I1218 01:37:41.698559 1535974 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb17cfebd2dd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:ff:26:b3:7d:81} reservation:<nil>}
	I1218 01:37:41.698831 1535974 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ab8f39244bc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:a9:2b:d7:5d:ce} reservation:<nil>}
	I1218 01:37:41.699243 1535974 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001983860}
	I1218 01:37:41.699261 1535974 network_create.go:124] attempt to create docker network newest-cni-120615 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1218 01:37:41.699323 1535974 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-120615 newest-cni-120615
	I1218 01:37:41.764110 1535974 network_create.go:108] docker network newest-cni-120615 192.168.85.0/24 created
	I1218 01:37:41.764138 1535974 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-120615" container
	I1218 01:37:41.764211 1535974 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 01:37:41.780305 1535974 cli_runner.go:164] Run: docker volume create newest-cni-120615 --label name.minikube.sigs.k8s.io=newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true
	I1218 01:37:41.798478 1535974 oci.go:103] Successfully created a docker volume newest-cni-120615
	I1218 01:37:41.798584 1535974 cli_runner.go:164] Run: docker run --rm --name newest-cni-120615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --entrypoint /usr/bin/test -v newest-cni-120615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 01:37:42.380541 1535974 oci.go:107] Successfully prepared a docker volume newest-cni-120615
	I1218 01:37:42.380617 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:42.380663 1535974 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 01:37:42.380737 1535974 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 01:37:46.199794 1535974 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.819017615s)
	I1218 01:37:46.199835 1535974 kic.go:203] duration metric: took 3.819169809s to extract preloaded images to volume ...
	W1218 01:37:46.199963 1535974 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 01:37:46.200068 1535974 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 01:37:46.253384 1535974 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-120615 --name newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-120615 --network newest-cni-120615 --ip 192.168.85.2 --volume newest-cni-120615:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 01:37:46.551881 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Running}}
	I1218 01:37:46.583903 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.608169 1535974 cli_runner.go:164] Run: docker exec newest-cni-120615 stat /var/lib/dpkg/alternatives/iptables
	I1218 01:37:46.667666 1535974 oci.go:144] the created container "newest-cni-120615" has a running status.
	I1218 01:37:46.667692 1535974 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa...
	I1218 01:37:46.834539 1535974 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 01:37:46.861844 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.884882 1535974 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 01:37:46.884908 1535974 kic_runner.go:114] Args: [docker exec --privileged newest-cni-120615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 01:37:46.942854 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.960511 1535974 machine.go:94] provisionDockerMachine start ...
	I1218 01:37:46.960612 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:46.978530 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:46.978859 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:46.978868 1535974 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:37:46.979490 1535974 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1218 01:37:50.148337 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:37:50.148363 1535974 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:37:50.148435 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.165796 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.166115 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.166132 1535974 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:37:50.330955 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:37:50.331106 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.348111 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.348435 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.348452 1535974 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:37:50.500688 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:37:50.500716 1535974 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:37:50.500744 1535974 ubuntu.go:190] setting up certificates
	I1218 01:37:50.500754 1535974 provision.go:84] configureAuth start
	I1218 01:37:50.500821 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:50.517589 1535974 provision.go:143] copyHostCerts
	I1218 01:37:50.517666 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:37:50.517680 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:37:50.517755 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:37:50.517871 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:37:50.517882 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:37:50.517912 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:37:50.517969 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:37:50.517977 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:37:50.518002 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:37:50.518054 1535974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
	I1218 01:37:50.674888 1535974 provision.go:177] copyRemoteCerts
	I1218 01:37:50.674959 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:37:50.675009 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.693570 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.800638 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:37:50.818412 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:37:50.836171 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:37:50.853859 1535974 provision.go:87] duration metric: took 353.089827ms to configureAuth
	I1218 01:37:50.853884 1535974 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:37:50.854091 1535974 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:50.854099 1535974 machine.go:97] duration metric: took 3.893564907s to provisionDockerMachine
	I1218 01:37:50.854106 1535974 client.go:176] duration metric: took 9.215762234s to LocalClient.Create
	I1218 01:37:50.854131 1535974 start.go:167] duration metric: took 9.215836644s to libmachine.API.Create "newest-cni-120615"
	I1218 01:37:50.854140 1535974 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:37:50.854151 1535974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:37:50.854199 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:37:50.854246 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.871379 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.976751 1535974 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:37:50.979800 1535974 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:37:50.979835 1535974 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:37:50.979846 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:37:50.979919 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:37:50.980017 1535974 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:37:50.980118 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:37:50.987435 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:51.010927 1535974 start.go:296] duration metric: took 156.770961ms for postStartSetup
	I1218 01:37:51.011358 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.028989 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:51.029275 1535974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:37:51.029337 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.046033 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.149901 1535974 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:37:51.154841 1535974 start.go:128] duration metric: took 9.520265624s to createHost
	I1218 01:37:51.154870 1535974 start.go:83] releasing machines lock for "newest-cni-120615", held for 9.520437574s
	I1218 01:37:51.154941 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.172452 1535974 ssh_runner.go:195] Run: cat /version.json
	I1218 01:37:51.172506 1535974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:37:51.172521 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.172564 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.192456 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.195325 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.384735 1535974 ssh_runner.go:195] Run: systemctl --version
	I1218 01:37:51.391571 1535974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:37:51.396317 1535974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:37:51.396387 1535974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:37:51.426976 1535974 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
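The find/mv above parks every bridge or podman CNI config under an .mk_disabled suffix so it cannot conflict with the CNI minikube installs later (kindnet, per the cni.go line further down). To see or undo what was disabled, something like:

    ls /etc/cni/net.d/*.mk_disabled
    # undo one (file name taken from the log line above):
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist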
	I1218 01:37:51.427002 1535974 start.go:496] detecting cgroup driver to use...
	I1218 01:37:51.427045 1535974 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:37:51.427094 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:37:51.443517 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:37:51.461122 1535974 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:37:51.461182 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:37:51.478844 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:37:51.497057 1535974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:37:51.618030 1535974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:37:51.746908 1535974 docker.go:234] disabling docker service ...
	I1218 01:37:51.747041 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:37:51.768317 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:37:51.781980 1535974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:37:51.904322 1535974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:37:52.052799 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:37:52.066888 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:37:52.082976 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:37:52.093587 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:37:52.102930 1535974 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:37:52.103042 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:37:52.112246 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.121385 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:37:52.130577 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.139689 1535974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:37:52.149904 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:37:52.159110 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:37:52.168101 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:37:52.177205 1535974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:37:52.185241 1535974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:37:52.193080 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.308369 1535974 ssh_runner.go:195] Run: sudo systemctl restart containerd
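The sed edits above force containerd onto the cgroupfs driver to match the cgroup driver detected on the host. A minimal post-restart check — a sketch, not anything minikube runs itself:

    grep SystemdCgroup /etc/containerd/config.toml   # expect: SystemdCgroup = false
    stat -fc %T /sys/fs/cgroup                       # tmpfs => cgroup v1, cgroup2fs => cgroup v2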
	I1218 01:37:52.450163 1535974 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:37:52.450242 1535974 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:37:52.454206 1535974 start.go:564] Will wait 60s for crictl version
	I1218 01:37:52.454330 1535974 ssh_runner.go:195] Run: which crictl
	I1218 01:37:52.457885 1535974 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:37:52.482102 1535974 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:37:52.482223 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.502684 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.526110 1535974 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:37:52.529020 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:52.546624 1535974 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:37:52.550634 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
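The hosts rewrite above is idempotent: drop any existing line ending in the tab-separated name, append the fresh mapping, then copy the temp file over /etc/hosts. The same pattern generalized as a shell function (the function name is hypothetical):

    update_hosts_entry() {   # usage: update_hosts_entry 192.168.85.1 host.minikube.internal
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }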
	I1218 01:37:52.563708 1535974 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:37:52.566648 1535974 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:37:52.566803 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:52.566895 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.591897 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.591927 1535974 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:37:52.592017 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.621212 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.621242 1535974 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:37:52.621251 1535974 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:37:52.621346 1535974 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
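The empty ExecStart= in the [Service] section of the unit dump above is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service so the line that follows replaces it instead of adding a second command. The merged unit can be inspected on the node with:

    systemctl cat kubelet   # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf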
	I1218 01:37:52.621421 1535974 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:37:52.651981 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:52.652006 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:52.652029 1535974 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:37:52.652053 1535974 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:37:52.652168 1535974 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
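A generated config like the one above can be sanity-checked before the real init. A sketch using kubeadm's dry-run support (binary path taken from this log; kubeadm config validate exists in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run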
	I1218 01:37:52.652238 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:37:52.659908 1535974 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:37:52.660006 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:37:52.667532 1535974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:37:52.680138 1535974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:37:52.693473 1535974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1218 01:37:52.706791 1535974 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:37:52.710393 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:37:52.719930 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.838696 1535974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:37:52.855521 1535974 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:37:52.855591 1535974 certs.go:195] generating shared ca certs ...
	I1218 01:37:52.855623 1535974 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.855818 1535974 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:37:52.855904 1535974 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:37:52.855930 1535974 certs.go:257] generating profile certs ...
	I1218 01:37:52.856023 1535974 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:37:52.856067 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt with IP's: []
	I1218 01:37:52.959822 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt ...
	I1218 01:37:52.959911 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt: {Name:mk1478bd753bc1bd23e013e8b566fd65e1f2e1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960142 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key ...
	I1218 01:37:52.960182 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key: {Name:mk3ecbc7ec855c1ebb5deefb951affdfc3f90c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960334 1535974 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:37:52.960379 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1218 01:37:53.073797 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 ...
	I1218 01:37:53.073831 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056: {Name:mkbff084b54b98d69b985b5f1bd631cb072aabd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074057 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 ...
	I1218 01:37:53.074074 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056: {Name:mkb73e5093692957aa43e022ccaed162c1426b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074169 1535974 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt
	I1218 01:37:53.074248 1535974 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key
	I1218 01:37:53.074307 1535974 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:37:53.074329 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt with IP's: []
	I1218 01:37:53.314103 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt ...
	I1218 01:37:53.314136 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt: {Name:mk54950f9214da12e2d9ae5c67b648894886fbc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314331 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key ...
	I1218 01:37:53.314345 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key: {Name:mk2d7b01164454a2df40dfec571544f9e3d23770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314570 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:37:53.314621 1535974 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:37:53.314635 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:37:53.314664 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:37:53.314694 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:37:53.314721 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:37:53.314772 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:53.315353 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:37:53.334028 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:37:53.352910 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:37:53.371116 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:37:53.388896 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:37:53.407154 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:37:53.424768 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:37:53.442432 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:37:53.459693 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:37:53.477104 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:37:53.494473 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:37:53.511694 1535974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:37:53.524605 1535974 ssh_runner.go:195] Run: openssl version
	I1218 01:37:53.531162 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.539159 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:37:53.547088 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550792 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550872 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.592275 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.599906 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.607314 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.614880 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:37:53.622354 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626261 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626329 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.673215 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:37:53.682819 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1218 01:37:53.692004 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.703568 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:37:53.718183 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726247 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726314 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.769713 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:37:53.777194 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
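The openssl/ln pairs above reproduce what c_rehash does: OpenSSL locates CA certificates by subject-name hash, so each one must be reachable as <hash>.0 under /etc/ssl/certs. For a single cert the pattern is:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # h is b5213941 in this run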
	I1218 01:37:53.784995 1535974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:37:53.788744 1535974 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 01:37:53.788807 1535974 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:53.788935 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:37:53.788995 1535974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:37:53.815984 1535974 cri.go:89] found id: ""
	I1218 01:37:53.816075 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:37:53.824897 1535974 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:37:53.834778 1535974 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:37:53.834915 1535974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:37:53.843777 1535974 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:37:53.843797 1535974 kubeadm.go:158] found existing configuration files:
	
	I1218 01:37:53.843886 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:37:53.851665 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:37:53.851766 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:37:53.859225 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:37:53.867081 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:37:53.867187 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:37:53.874504 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.882220 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:37:53.882286 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.889970 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:37:53.897334 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:37:53.897401 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
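Each grep/rm pair above applies the same rule: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. Condensed, the behavior is:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done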
	I1218 01:37:53.904593 1535974 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:37:53.944551 1535974 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:37:53.944611 1535974 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:37:54.027408 1535974 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:37:54.027490 1535974 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:37:54.027530 1535974 kubeadm.go:319] OS: Linux
	I1218 01:37:54.027581 1535974 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:37:54.027632 1535974 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:37:54.027693 1535974 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:37:54.027752 1535974 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:37:54.027803 1535974 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:37:54.027862 1535974 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:37:54.027912 1535974 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:37:54.027964 1535974 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:37:54.028012 1535974 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:37:54.097877 1535974 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:37:54.097993 1535974 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:37:54.098097 1535974 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:37:54.105071 1535974 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:37:54.111500 1535974 out.go:252]   - Generating certificates and keys ...
	I1218 01:37:54.111603 1535974 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:37:54.111672 1535974 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:37:54.530590 1535974 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 01:37:54.977111 1535974 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 01:37:55.271802 1535974 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 01:37:55.800100 1535974 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 01:37:55.973303 1535974 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 01:37:55.974317 1535974 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.183207 1535974 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 01:37:56.183548 1535974 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.263322 1535974 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 01:37:56.663315 1535974 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 01:37:56.917852 1535974 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 01:37:56.918300 1535974 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:37:57.144859 1535974 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:37:57.575780 1535974 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:37:57.878713 1535974 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:37:58.333388 1535974 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:37:58.732682 1535974 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:37:58.733416 1535974 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:37:58.737417 1535974 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:37:58.741102 1535974 out.go:252]   - Booting up control plane ...
	I1218 01:37:58.741209 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:37:58.741290 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:37:58.741882 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:37:58.757974 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:37:58.758530 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:37:58.766133 1535974 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:37:58.766550 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:37:58.766761 1535974 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:37:58.901026 1535974 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:37:58.901158 1535974 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:39:44.779437 1510702 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001112678s
	I1218 01:39:44.779500 1510702 kubeadm.go:319] 
	I1218 01:39:44.779569 1510702 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:39:44.779604 1510702 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:39:44.779726 1510702 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:39:44.779736 1510702 kubeadm.go:319] 
	I1218 01:39:44.779894 1510702 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:39:44.779933 1510702 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:39:44.779971 1510702 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:39:44.779981 1510702 kubeadm.go:319] 
	I1218 01:39:44.784423 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:39:44.784877 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:39:44.784990 1510702 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:39:44.785228 1510702 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:39:44.785237 1510702 kubeadm.go:319] 
	I1218 01:39:44.785307 1510702 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
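The failure mode here is the kubelet never answering its local healthz probe inside kubeadm's 4m0s window, so no static pods (apiserver, etcd, scheduler, controller-manager) ever start — which is exactly what the container listings below confirm. The checks kubeadm suggests, plus the probe itself, can be run directly on the node (after minikube ssh -p <profile>):

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet -n 100 --no-pager
    curl -sS http://127.0.0.1:10248/healthz; echo   # a healthy kubelet answers "ok"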
	I1218 01:39:44.785368 1510702 kubeadm.go:403] duration metric: took 8m6.991155077s to StartCluster
	I1218 01:39:44.785429 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:39:44.785502 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:39:44.810447 1510702 cri.go:89] found id: ""
	I1218 01:39:44.810472 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.810482 1510702 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:39:44.810488 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:39:44.810555 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:39:44.839406 1510702 cri.go:89] found id: ""
	I1218 01:39:44.839434 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.839443 1510702 logs.go:284] No container was found matching "etcd"
	I1218 01:39:44.839450 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:39:44.839511 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:39:44.868069 1510702 cri.go:89] found id: ""
	I1218 01:39:44.868096 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.868105 1510702 logs.go:284] No container was found matching "coredns"
	I1218 01:39:44.868111 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:39:44.868169 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:39:44.895127 1510702 cri.go:89] found id: ""
	I1218 01:39:44.895154 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.895163 1510702 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:39:44.895170 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:39:44.895229 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:39:44.922045 1510702 cri.go:89] found id: ""
	I1218 01:39:44.922067 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.922075 1510702 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:39:44.922081 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:39:44.922141 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:39:44.947348 1510702 cri.go:89] found id: ""
	I1218 01:39:44.947371 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.947380 1510702 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:39:44.947386 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:39:44.947445 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:39:44.974747 1510702 cri.go:89] found id: ""
	I1218 01:39:44.974817 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.974841 1510702 logs.go:284] No container was found matching "kindnet"
	I1218 01:39:44.974872 1510702 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:39:44.974904 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:39:45.158574 1510702 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:39:45.158593 1510702 logs.go:123] Gathering logs for containerd ...
	I1218 01:39:45.158606 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:39:45.231899 1510702 logs.go:123] Gathering logs for container status ...
	I1218 01:39:45.231984 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:39:45.274173 1510702 logs.go:123] Gathering logs for kubelet ...
	I1218 01:39:45.274204 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:39:45.347906 1510702 logs.go:123] Gathering logs for dmesg ...
	I1218 01:39:45.347946 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1218 01:39:45.367741 1510702 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 01:39:45.367789 1510702 out.go:285] * 
	W1218 01:39:45.367853 1510702 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.367874 1510702 out.go:285] * 
	W1218 01:39:45.370057 1510702 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:39:45.374979 1510702 out.go:203] 
	W1218 01:39:45.378669 1510702 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.378761 1510702 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:39:45.378790 1510702 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:39:45.381944 1510702 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:31:28 no-preload-970975 containerd[759]: time="2025-12-18T01:31:28.470947504Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.711596763Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.713869317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.723846633Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.727456559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.796825379Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.799106228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.807433713Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.808922925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.292875381Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.295130606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.303984182Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.305000224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.336005639Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.338266928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.348579276Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.349580951Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.488112742Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.491177326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.502169199Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.503038028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.888978136Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.891655209Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.901388576Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.901784046Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:39:49.518944    5821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:49.519465    5821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:49.521075    5821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:49.521696    5821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:49.523237    5821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:39:49 up  8:22,  0 user,  load average: 1.84, 2.13, 2.18
	Linux no-preload-970975 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:46 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:46 no-preload-970975 kubelet[5582]: E1218 01:39:46.981122    5582 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:46 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:47 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 18 01:39:47 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:47 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:47 no-preload-970975 kubelet[5611]: E1218 01:39:47.715101    5611 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:47 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:47 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:48 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 18 01:39:48 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:48 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:48 no-preload-970975 kubelet[5705]: E1218 01:39:48.471536    5705 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:48 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:48 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:39:49 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 18 01:39:49 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:49 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:39:49 no-preload-970975 kubelet[5745]: E1218 01:39:49.215358    5745 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:39:49 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:39:49 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 6 (353.594213ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:39:49.974315 1540424 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.93s)
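
The DeployApp failure above is a cascade, not a test-specific bug: the kubelet journal in the post-mortem shows kubelet v1.35.0-rc.1 exiting at startup with "kubelet is configured to not run on a host using cgroup v1", systemd restarting it in a loop (restart counter 322 through 325), and consequently no apiserver ever listening on port 8443. The kubeadm warning in the same log names the opt-out: the KubeletConfiguration option 'FailCgroupV1' must be set to 'false'. A minimal sketch of that override follows, using the upstream KubeletConfiguration v1beta1 field name; how it would be delivered in this job (for example via the kubeadm patches step already visible in the '[patches] Applied patch ... to target "kubeletconfiguration"' lines) is an assumption, not something these logs confirm:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Assumed override: opt back in to cgroup v1 hosts. Without this, kubelet
	# v1.35+ fails its startup config validation, as the journal entries show.
	failCgroupV1: false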

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (85.85s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1218 01:40:04.395694 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:11.683888 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:11.690297 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:11.701637 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:11.722998 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:11.764495 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:11.846005 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:12.007656 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:12.329433 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:12.971586 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m24.258314291s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-970975 describe deploy/metrics-server -n kube-system
E1218 01:41:14.255110 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-970975 describe deploy/metrics-server -n kube-system: exit status 1 (55.540622ms)

** stderr ** 
	error: context "no-preload-970975" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-970975 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
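
The metrics-server failure is downstream of the same kubelet crash loop: "kubectl apply" first validates each manifest against the apiserver's OpenAPI endpoint, and with nothing listening on localhost:8443 every file fails with "connection refused", so the addon manifests are never applied. A minimal triage sketch for checking the two facts that matter here, assuming the container name from the docker inspect output below and a docker CLI on the host (plain coreutils/systemd commands, not part of the test harness):

	# Which cgroup version does the node container see?
	# "cgroup2fs" means v2; "tmpfs" means v1, the failing case in these logs.
	docker exec no-preload-970975 stat -fc %T /sys/fs/cgroup

	# Observe the kubelet restart loop directly:
	docker exec no-preload-970975 journalctl -u kubelet --no-pager -n 20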
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-970975
helpers_test.go:244: (dbg) docker inspect no-preload-970975:

-- stdout --
	[
	    {
	        "Id": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	        "Created": "2025-12-18T01:31:17.073767234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1511022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:31:17.16290886Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hosts",
	        "LogPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d-json.log",
	        "Name": "/no-preload-970975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	                "LowerDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970975",
	                "Source": "/var/lib/docker/volumes/no-preload-970975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970975",
	                "name.minikube.sigs.k8s.io": "no-preload-970975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1e9bc76dbd04c46d3398cadb3276424663a2b675616e94f670f35547ef4442d",
	            "SandboxKey": "/var/run/docker/netns/e1e9bc76dbd0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34177"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34179"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:4c:f1:db:47:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ab8f39244bcf5d4900f2aa17c2792aefcc54582c4c250699be8d71c4c2a27a9",
	                    "EndpointID": "a42f74c81af72816a5096acec3153b345a82e549e666df17a9cd4661c0bfa55d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970975",
	                        "b1403d931305"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975: exit status 6 (332.053182ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:41:14.631649 1541943 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970975 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-207212                                                                                                                                                                                                                                │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ delete  │ -p old-k8s-version-207212                                                                                                                                                                                                                                │ old-k8s-version-207212       │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:32 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:32 UTC │ 18 Dec 25 01:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-922343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:33 UTC │
	│ stop    │ -p embed-certs-922343 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-922343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:35 UTC │
	│ image   │ embed-certs-922343 image list --format=json                                                                                                                                                                                                              │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ pause   │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ unpause │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:39 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:37:41
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:37:41.409265 1535974 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:37:41.409621 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409656 1535974 out.go:374] Setting ErrFile to fd 2...
	I1218 01:37:41.409674 1535974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:37:41.409955 1535974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:37:41.410413 1535974 out.go:368] Setting JSON to false
	I1218 01:37:41.411299 1535974 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30008,"bootTime":1765991854,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:37:41.411395 1535974 start.go:143] virtualization:  
	I1218 01:37:41.415580 1535974 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:37:41.419867 1535974 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:37:41.419945 1535974 notify.go:221] Checking for updates...
	I1218 01:37:41.426287 1535974 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:37:41.429432 1535974 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:37:41.433605 1535974 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:37:41.436760 1535974 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:37:41.439743 1535974 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:37:41.443485 1535974 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:41.443626 1535974 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:37:41.476508 1535974 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:37:41.476682 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.529692 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.519945478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.529801 1535974 docker.go:319] overlay module found
	I1218 01:37:41.533160 1535974 out.go:179] * Using the docker driver based on user configuration
	I1218 01:37:41.536049 1535974 start.go:309] selected driver: docker
	I1218 01:37:41.536071 1535974 start.go:927] validating driver "docker" against <nil>
	I1218 01:37:41.536087 1535974 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:37:41.536903 1535974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:37:41.594960 1535974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:37:41.586076136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:37:41.595118 1535974 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1218 01:37:41.595153 1535974 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1218 01:37:41.595385 1535974 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:37:41.598327 1535974 out.go:179] * Using Docker driver with root privileges
	I1218 01:37:41.601257 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:41.601333 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:41.601345 1535974 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 01:37:41.601426 1535974 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:41.606414 1535974 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:37:41.609305 1535974 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:37:41.612198 1535974 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:37:41.615045 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:41.615091 1535974 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:37:41.615104 1535974 cache.go:65] Caching tarball of preloaded images
	I1218 01:37:41.615136 1535974 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:37:41.615184 1535974 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:37:41.615194 1535974 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:37:41.615294 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:41.615311 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json: {Name:mk1c21bf1c938626eee4c23c85b81bbb6255d680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:41.634234 1535974 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:37:41.634258 1535974 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:37:41.634273 1535974 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:37:41.634304 1535974 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:37:41.634418 1535974 start.go:364] duration metric: took 93.52µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:37:41.634450 1535974 start.go:93] Provisioning new machine with config: &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:37:41.634560 1535974 start.go:125] createHost starting for "" (driver="docker")
	I1218 01:37:41.638056 1535974 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1218 01:37:41.638295 1535974 start.go:159] libmachine.API.Create for "newest-cni-120615" (driver="docker")
	I1218 01:37:41.638333 1535974 client.go:173] LocalClient.Create starting
	I1218 01:37:41.638412 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 01:37:41.638450 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638466 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638528 1535974 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 01:37:41.638549 1535974 main.go:143] libmachine: Decoding PEM data...
	I1218 01:37:41.638564 1535974 main.go:143] libmachine: Parsing certificate...
	I1218 01:37:41.638936 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 01:37:41.659766 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 01:37:41.659848 1535974 network_create.go:284] running [docker network inspect newest-cni-120615] to gather additional debugging logs...
	I1218 01:37:41.659883 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615
	W1218 01:37:41.680710 1535974 cli_runner.go:211] docker network inspect newest-cni-120615 returned with exit code 1
	I1218 01:37:41.680751 1535974 network_create.go:287] error running [docker network inspect newest-cni-120615]: docker network inspect newest-cni-120615: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-120615 not found
	I1218 01:37:41.680768 1535974 network_create.go:289] output of [docker network inspect newest-cni-120615]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-120615 not found
	
	** /stderr **
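	The --format argument in the inspect calls above is a Go text/template evaluated by the Docker CLI against the network object; the inspect fails here simply because the network does not exist yet. As a minimal sketch of how that format string works (the struct below is a stand-in with hypothetical field shapes, not Docker's actual types), the same template can be run against local data:

```go
// Evaluate a Docker-style --format template against a stand-in struct.
package main

import (
	"os"
	"text/template"
)

// ipamConfig mimics the shape the template dereferences; names are illustrative.
type ipamConfig struct{ Subnet, Gateway string }

type network struct {
	Name   string
	Driver string
	IPAM   struct{ Config []ipamConfig }
}

func main() {
	tmpl := template.Must(template.New("net").Parse(
		`{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}"}`))
	n := network{Name: "newest-cni-120615", Driver: "bridge"}
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.85.0/24", Gateway: "192.168.85.1"}}
	_ = tmpl.Execute(os.Stdout, n)
}
```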
	I1218 01:37:41.680867 1535974 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:41.697958 1535974 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
	I1218 01:37:41.698338 1535974 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-687fba22ee0f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:88:15:4b:79:13} reservation:<nil>}
	I1218 01:37:41.698559 1535974 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb17cfebd2dd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:ff:26:b3:7d:81} reservation:<nil>}
	I1218 01:37:41.698831 1535974 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ab8f39244bc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:a9:2b:d7:5d:ce} reservation:<nil>}
	I1218 01:37:41.699243 1535974 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001983860}
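	The four "skipping subnet ... taken" lines and the final "using free private subnet" line above show minikube stepping through candidate private /24 subnets (third octet 49, 58, 67, 76, 85) and rejecting each one whose gateway is already bound to a host bridge. A rough sketch of that idea, not minikube's actual network package:

```go
// Walk candidate private /24 subnets and skip any whose gateway IP is already
// assigned to a local interface, mirroring the scan logged above.
package main

import (
	"fmt"
	"net"
)

func gatewayTaken(gw net.IP) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(gw) {
			return true
		}
	}
	return false
}

func main() {
	// Candidates step by 9 in the third octet, as in the log: 49, 58, ..., 85.
	for third := 49; third <= 85; third += 9 {
		gw := net.IPv4(192, 168, byte(third), 1)
		if gatewayTaken(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24: taken\n", third)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
		return
	}
}
```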
	I1218 01:37:41.699261 1535974 network_create.go:124] attempt to create docker network newest-cni-120615 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1218 01:37:41.699323 1535974 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-120615 newest-cni-120615
	I1218 01:37:41.764110 1535974 network_create.go:108] docker network newest-cni-120615 192.168.85.0/24 created
	I1218 01:37:41.764138 1535974 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-120615" container
	I1218 01:37:41.764211 1535974 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 01:37:41.780305 1535974 cli_runner.go:164] Run: docker volume create newest-cni-120615 --label name.minikube.sigs.k8s.io=newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true
	I1218 01:37:41.798478 1535974 oci.go:103] Successfully created a docker volume newest-cni-120615
	I1218 01:37:41.798584 1535974 cli_runner.go:164] Run: docker run --rm --name newest-cni-120615-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --entrypoint /usr/bin/test -v newest-cni-120615:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 01:37:42.380541 1535974 oci.go:107] Successfully prepared a docker volume newest-cni-120615
	I1218 01:37:42.380617 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:42.380663 1535974 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 01:37:42.380737 1535974 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 01:37:46.199794 1535974 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-120615:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.819017615s)
	I1218 01:37:46.199835 1535974 kic.go:203] duration metric: took 3.819169809s to extract preloaded images to volume ...
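	The extraction step above uses a disposable container whose entrypoint is /usr/bin/tar: the preload tarball is bind-mounted read-only and the named volume (which will later back /var in the node container) is mounted as the target. A hedged os/exec sketch of that pattern, with the tarball path shortened to a placeholder rather than the full Jenkins path from the log:

```go
// Run a throwaway container with tar as entrypoint to unpack an lz4 preload
// tarball into a named Docker volume, as in the cli_runner invocation above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// "/tmp/preload.tar.lz4" is a placeholder for the cached tarball path.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/tmp/preload.tar.lz4:/preloaded.tar:ro",
		"-v", "newest-cni-120615:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
```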
	W1218 01:37:46.199963 1535974 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 01:37:46.200068 1535974 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 01:37:46.253384 1535974 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-120615 --name newest-cni-120615 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-120615 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-120615 --network newest-cni-120615 --ip 192.168.85.2 --volume newest-cni-120615:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 01:37:46.551881 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Running}}
	I1218 01:37:46.583903 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.608169 1535974 cli_runner.go:164] Run: docker exec newest-cni-120615 stat /var/lib/dpkg/alternatives/iptables
	I1218 01:37:46.667666 1535974 oci.go:144] the created container "newest-cni-120615" has a running status.
	I1218 01:37:46.667692 1535974 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa...
	I1218 01:37:46.834539 1535974 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 01:37:46.861844 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.884882 1535974 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 01:37:46.884908 1535974 kic_runner.go:114] Args: [docker exec --privileged newest-cni-120615 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 01:37:46.942854 1535974 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:37:46.960511 1535974 machine.go:94] provisionDockerMachine start ...
	I1218 01:37:46.960612 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:46.978530 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:46.978859 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:46.978868 1535974 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:37:46.979490 1535974 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1218 01:37:50.148337 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
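	The "Error dialing TCP: ssh: handshake failed: EOF" line followed by a successful hostname command shows the provisioner retrying against 127.0.0.1:34207 (the host port Docker published for the container's 22/tcp) while sshd finishes starting. A simplified sketch of that wait loop, probing only TCP reachability rather than a full SSH handshake:

```go
// Poll the published SSH port until it accepts connections, as the
// libmachine client effectively does after the initial handshake EOF.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:34207", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port is accepting connections")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for sshd")
}
```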
	
	I1218 01:37:50.148363 1535974 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:37:50.148435 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.165796 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.166115 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.166132 1535974 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:37:50.330955 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:37:50.331106 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.348111 1535974 main.go:143] libmachine: Using SSH client type: native
	I1218 01:37:50.348435 1535974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1218 01:37:50.348452 1535974 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:37:50.500688 1535974 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:37:50.500716 1535974 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:37:50.500744 1535974 ubuntu.go:190] setting up certificates
	I1218 01:37:50.500754 1535974 provision.go:84] configureAuth start
	I1218 01:37:50.500821 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:50.517589 1535974 provision.go:143] copyHostCerts
	I1218 01:37:50.517666 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:37:50.517680 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:37:50.517755 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:37:50.517871 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:37:50.517882 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:37:50.517912 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:37:50.517969 1535974 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:37:50.517977 1535974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:37:50.518002 1535974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:37:50.518054 1535974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
	I1218 01:37:50.674888 1535974 provision.go:177] copyRemoteCerts
	I1218 01:37:50.674959 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:37:50.675009 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.693570 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.800638 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:37:50.818412 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:37:50.836171 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:37:50.853859 1535974 provision.go:87] duration metric: took 353.089827ms to configureAuth
	I1218 01:37:50.853884 1535974 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:37:50.854091 1535974 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:37:50.854099 1535974 machine.go:97] duration metric: took 3.893564907s to provisionDockerMachine
	I1218 01:37:50.854106 1535974 client.go:176] duration metric: took 9.215762234s to LocalClient.Create
	I1218 01:37:50.854131 1535974 start.go:167] duration metric: took 9.215836644s to libmachine.API.Create "newest-cni-120615"
	I1218 01:37:50.854140 1535974 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:37:50.854151 1535974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:37:50.854199 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:37:50.854246 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:50.871379 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:50.976751 1535974 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:37:50.979800 1535974 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:37:50.979835 1535974 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:37:50.979846 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:37:50.979919 1535974 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:37:50.980017 1535974 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:37:50.980118 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:37:50.987435 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:51.010927 1535974 start.go:296] duration metric: took 156.770961ms for postStartSetup
	I1218 01:37:51.011358 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.028989 1535974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:37:51.029275 1535974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:37:51.029337 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.046033 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.149901 1535974 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:37:51.154841 1535974 start.go:128] duration metric: took 9.520265624s to createHost
	I1218 01:37:51.154870 1535974 start.go:83] releasing machines lock for "newest-cni-120615", held for 9.520437574s
	I1218 01:37:51.154941 1535974 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:37:51.172452 1535974 ssh_runner.go:195] Run: cat /version.json
	I1218 01:37:51.172506 1535974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:37:51.172521 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.172564 1535974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:37:51.192456 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.195325 1535974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:37:51.384735 1535974 ssh_runner.go:195] Run: systemctl --version
	I1218 01:37:51.391571 1535974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:37:51.396317 1535974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:37:51.396387 1535974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:37:51.426976 1535974 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1218 01:37:51.427002 1535974 start.go:496] detecting cgroup driver to use...
	I1218 01:37:51.427045 1535974 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:37:51.427094 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:37:51.443517 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:37:51.461122 1535974 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:37:51.461182 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:37:51.478844 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:37:51.497057 1535974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:37:51.618030 1535974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:37:51.746908 1535974 docker.go:234] disabling docker service ...
	I1218 01:37:51.747041 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:37:51.768317 1535974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:37:51.781980 1535974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:37:51.904322 1535974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:37:52.052799 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:37:52.066888 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:37:52.082976 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:37:52.093587 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:37:52.102930 1535974 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:37:52.103042 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:37:52.112246 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.121385 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:37:52.130577 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:37:52.139689 1535974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:37:52.149904 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:37:52.159110 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:37:52.168101 1535974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:37:52.177205 1535974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:37:52.185241 1535974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:37:52.193080 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.308369 1535974 ssh_runner.go:195] Run: sudo systemctl restart containerd
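	The run of sed commands above rewrites /etc/containerd/config.toml in place so the runtime matches the detected "cgroupfs" cgroup driver, pins the sandbox image, normalizes the runc runtime type, and re-enables unprivileged ports, before daemon-reload and a containerd restart. A sketch of the core idea behind one of those edits (a regex rewrite of the config file, not the actual shell commands):

```go
// Force SystemdCgroup = false in containerd's config so it matches the
// cgroupfs driver detected on the host, preserving the line's indentation.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
```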
	I1218 01:37:52.450163 1535974 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:37:52.450242 1535974 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:37:52.454206 1535974 start.go:564] Will wait 60s for crictl version
	I1218 01:37:52.454330 1535974 ssh_runner.go:195] Run: which crictl
	I1218 01:37:52.457885 1535974 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:37:52.482102 1535974 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:37:52.482223 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.502684 1535974 ssh_runner.go:195] Run: containerd --version
	I1218 01:37:52.526110 1535974 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:37:52.529020 1535974 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:37:52.546624 1535974 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:37:52.550634 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:37:52.563708 1535974 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:37:52.566648 1535974 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:37:52.566803 1535974 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:37:52.566895 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.591897 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.591927 1535974 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:37:52.592017 1535974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:37:52.621212 1535974 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:37:52.621242 1535974 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:37:52.621251 1535974 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:37:52.621346 1535974 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:37:52.621421 1535974 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:37:52.651981 1535974 cni.go:84] Creating CNI manager for ""
	I1218 01:37:52.652006 1535974 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:37:52.652029 1535974 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:37:52.652053 1535974 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:37:52.652168 1535974 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 01:37:52.652238 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:37:52.659908 1535974 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:37:52.660006 1535974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:37:52.667532 1535974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:37:52.680138 1535974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:37:52.693473 1535974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
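	The "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" step writes the multi-document kubeadm config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As an illustrative check, and assuming the gopkg.in/yaml.v3 package is available (this is not something the test itself does), the document stream can be split and each section's kind listed:

```go
// Decode the generated kubeadm.yaml document by document and print the
// apiVersion/kind of each section, confirming the four parts shown above.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```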
	I1218 01:37:52.706791 1535974 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:37:52.710393 1535974 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
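	The shell pipeline above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal line, appending the fresh mapping, staging the result in /tmp, and copying it back with sudo; the same pattern was used earlier for host.minikube.internal. A hedged Go rendering of that idea (staging next to the target so the final rename stays on one filesystem, unlike the shell's /tmp + cp approach):

```go
// Replace any existing hosts entry for a name and swap the file into place.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	const entry = "192.168.85.2\t" + name // IP taken from the log above
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp := "/etc/hosts.new" // same filesystem, so the rename below is atomic
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		log.Fatal(err)
	}
}
```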
	I1218 01:37:52.719930 1535974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:37:52.838696 1535974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:37:52.855521 1535974 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:37:52.855591 1535974 certs.go:195] generating shared ca certs ...
	I1218 01:37:52.855623 1535974 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.855818 1535974 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:37:52.855904 1535974 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:37:52.855930 1535974 certs.go:257] generating profile certs ...
	I1218 01:37:52.856023 1535974 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:37:52.856067 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt with IP's: []
	I1218 01:37:52.959822 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt ...
	I1218 01:37:52.959911 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.crt: {Name:mk1478bd753bc1bd23e013e8b566fd65e1f2e1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960142 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key ...
	I1218 01:37:52.960182 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key: {Name:mk3ecbc7ec855c1ebb5deefb951affdfc3f90c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:52.960334 1535974 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:37:52.960379 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1218 01:37:53.073797 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 ...
	I1218 01:37:53.073831 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056: {Name:mkbff084b54b98d69b985b5f1bd631cb072aabd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074057 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 ...
	I1218 01:37:53.074074 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056: {Name:mkb73e5093692957aa43e022ccaed162c1426b9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.074169 1535974 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt
	I1218 01:37:53.074248 1535974 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key
	I1218 01:37:53.074307 1535974 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:37:53.074329 1535974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt with IP's: []
	I1218 01:37:53.314103 1535974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt ...
	I1218 01:37:53.314136 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt: {Name:mk54950f9214da12e2d9ae5c67b648894886fbc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:37:53.314331 1535974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key ...
	I1218 01:37:53.314345 1535974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key: {Name:mk2d7b01164454a2df40dfec571544f9e3d23770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
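	The crypto.go lines above generate the profile's client, apiserver, and aggregator certificates locally, signing each against the shared minikube CA and embedding the IP SANs logged earlier (san=[10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]). A self-contained standard-library sketch of that technique, not minikube's actual crypto.go:

```go
// Issue a serving certificate with IP SANs, signed by a freshly created CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the san=[...] list logged for apiserver.crt.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```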
	I1218 01:37:53.314570 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:37:53.314621 1535974 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:37:53.314635 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:37:53.314664 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:37:53.314694 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:37:53.314721 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:37:53.314772 1535974 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:37:53.315353 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:37:53.334028 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:37:53.352910 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:37:53.371116 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:37:53.388896 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:37:53.407154 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:37:53.424768 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:37:53.442432 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:37:53.459693 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:37:53.477104 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:37:53.494473 1535974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:37:53.511694 1535974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:37:53.524605 1535974 ssh_runner.go:195] Run: openssl version
	I1218 01:37:53.531162 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.539159 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:37:53.547088 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550792 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.550872 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:37:53.592275 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.599906 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
	I1218 01:37:53.607314 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.614880 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:37:53.622354 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626261 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.626329 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:37:53.673215 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:37:53.682819 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1218 01:37:53.692004 1535974 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.703568 1535974 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:37:53.718183 1535974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726247 1535974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.726314 1535974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:37:53.769713 1535974 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:37:53.777194 1535974 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
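The test/ln/openssl sequence above is minikube installing each CA into the guest trust store using the OpenSSL subject-hash symlink convention: hash the cert, then point /etc/ssl/certs/<hash>.0 at it (the hashes 3ec20f2e, b5213941 and 51391683 come straight from `openssl x509 -hash`). A minimal sketch of the same dance for one cert, with paths from the log:

	# Sketch of the symlink convention used above (same commands the test runs):
	sudo test -s /usr/share/ca-certificates/minikubeCA.pem        # non-empty?
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # e.g. b5213941.0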
	I1218 01:37:53.784995 1535974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:37:53.788744 1535974 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 01:37:53.788807 1535974 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:37:53.788935 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:37:53.788995 1535974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:37:53.815984 1535974 cri.go:89] found id: ""
	I1218 01:37:53.816075 1535974 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:37:53.824897 1535974 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:37:53.834778 1535974 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:37:53.834915 1535974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:37:53.843777 1535974 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:37:53.843797 1535974 kubeadm.go:158] found existing configuration files:
	
	I1218 01:37:53.843886 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:37:53.851665 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:37:53.851766 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:37:53.859225 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:37:53.867081 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:37:53.867187 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:37:53.874504 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.882220 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:37:53.882286 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:37:53.889970 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:37:53.897334 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:37:53.897401 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
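The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references https://control-plane.minikube.internal:8443; here none exist, so all four are removed and kubeadm regenerates them. The same commands, consolidated into one loop (a sketch; the individual Run: lines above are what actually executed):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done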
	I1218 01:37:53.904593 1535974 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:37:53.944551 1535974 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:37:53.944611 1535974 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:37:54.027408 1535974 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:37:54.027490 1535974 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:37:54.027530 1535974 kubeadm.go:319] OS: Linux
	I1218 01:37:54.027581 1535974 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:37:54.027632 1535974 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:37:54.027693 1535974 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:37:54.027752 1535974 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:37:54.027803 1535974 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:37:54.027862 1535974 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:37:54.027912 1535974 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:37:54.027964 1535974 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:37:54.028012 1535974 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:37:54.097877 1535974 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:37:54.097993 1535974 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:37:54.098097 1535974 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:37:54.105071 1535974 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:37:54.111500 1535974 out.go:252]   - Generating certificates and keys ...
	I1218 01:37:54.111603 1535974 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:37:54.111672 1535974 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:37:54.530590 1535974 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 01:37:54.977111 1535974 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 01:37:55.271802 1535974 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 01:37:55.800100 1535974 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 01:37:55.973303 1535974 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 01:37:55.974317 1535974 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.183207 1535974 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 01:37:56.183548 1535974 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:37:56.263322 1535974 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 01:37:56.663315 1535974 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 01:37:56.917852 1535974 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 01:37:56.918300 1535974 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:37:57.144859 1535974 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:37:57.575780 1535974 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:37:57.878713 1535974 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:37:58.333388 1535974 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:37:58.732682 1535974 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:37:58.733416 1535974 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:37:58.737417 1535974 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:37:58.741102 1535974 out.go:252]   - Booting up control plane ...
	I1218 01:37:58.741209 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:37:58.741290 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:37:58.741882 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:37:58.757974 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:37:58.758530 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:37:58.766133 1535974 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:37:58.766550 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:37:58.766761 1535974 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:37:58.901026 1535974 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:37:58.901158 1535974 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:39:44.779437 1510702 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001112678s
	I1218 01:39:44.779500 1510702 kubeadm.go:319] 
	I1218 01:39:44.779569 1510702 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:39:44.779604 1510702 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:39:44.779726 1510702 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:39:44.779736 1510702 kubeadm.go:319] 
	I1218 01:39:44.779894 1510702 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:39:44.779933 1510702 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:39:44.779971 1510702 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:39:44.779981 1510702 kubeadm.go:319] 
	I1218 01:39:44.784423 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:39:44.784877 1510702 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:39:44.784990 1510702 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:39:44.785228 1510702 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:39:44.785237 1510702 kubeadm.go:319] 
	I1218 01:39:44.785307 1510702 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
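Per the error text above, the wait-control-plane phase is just polling the kubelet's healthz endpoint until the 4m0s deadline. The probe can be reproduced by hand on the node (command taken from the error message itself):

	# The health check kubeadm timed out on; a running kubelet answers "ok".
	curl -sSL http://127.0.0.1:10248/healthz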
	I1218 01:39:44.785368 1510702 kubeadm.go:403] duration metric: took 8m6.991155077s to StartCluster
	I1218 01:39:44.785429 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:39:44.785502 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:39:44.810447 1510702 cri.go:89] found id: ""
	I1218 01:39:44.810472 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.810482 1510702 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:39:44.810488 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:39:44.810555 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:39:44.839406 1510702 cri.go:89] found id: ""
	I1218 01:39:44.839434 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.839443 1510702 logs.go:284] No container was found matching "etcd"
	I1218 01:39:44.839450 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:39:44.839511 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:39:44.868069 1510702 cri.go:89] found id: ""
	I1218 01:39:44.868096 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.868105 1510702 logs.go:284] No container was found matching "coredns"
	I1218 01:39:44.868111 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:39:44.868169 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:39:44.895127 1510702 cri.go:89] found id: ""
	I1218 01:39:44.895154 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.895163 1510702 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:39:44.895170 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:39:44.895229 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:39:44.922045 1510702 cri.go:89] found id: ""
	I1218 01:39:44.922067 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.922075 1510702 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:39:44.922081 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:39:44.922141 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:39:44.947348 1510702 cri.go:89] found id: ""
	I1218 01:39:44.947371 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.947380 1510702 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:39:44.947386 1510702 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:39:44.947445 1510702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:39:44.974747 1510702 cri.go:89] found id: ""
	I1218 01:39:44.974817 1510702 logs.go:282] 0 containers: []
	W1218 01:39:44.974841 1510702 logs.go:284] No container was found matching "kindnet"
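After the init failure, minikube enumerates each expected control-plane container by name through crictl; every lookup above returns an empty id list, confirming nothing was ever created. The per-component listing is runnable as-is on the node (same command as the Run: lines above):

	sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = never created
	sudo crictl ps -a --quiet --name=etcd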
	I1218 01:39:44.974872 1510702 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:39:44.974904 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:39:45.158574 1510702 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:39:45.144792    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.145613    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.148682    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.149159    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:39:45.153644    5433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:39:45.158593 1510702 logs.go:123] Gathering logs for containerd ...
	I1218 01:39:45.158606 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:39:45.231899 1510702 logs.go:123] Gathering logs for container status ...
	I1218 01:39:45.231984 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:39:45.274173 1510702 logs.go:123] Gathering logs for kubelet ...
	I1218 01:39:45.274204 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:39:45.347906 1510702 logs.go:123] Gathering logs for dmesg ...
	I1218 01:39:45.347946 1510702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
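The log-gathering commands above are plain journalctl/dmesg/crictl invocations and can be rerun verbatim when triaging the same failure by hand:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a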
	W1218 01:39:45.367741 1510702 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1218 01:39:45.367789 1510702 out.go:285] * 
	W1218 01:39:45.367853 1510702 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.367874 1510702 out.go:285] * 
	W1218 01:39:45.370057 1510702 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:39:45.374979 1510702 out.go:203] 
	W1218 01:39:45.378669 1510702 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112678s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1218 01:39:45.378761 1510702 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:39:45.378790 1510702 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:39:45.381944 1510702 out.go:203] 
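The suggestion above can be applied directly; `<profile>` below is a placeholder for whichever cluster failed (newest-cni-120615 and no-preload-970975 in this report), and the flag is copied verbatim from the Suggestion line:

	journalctl -xeu kubelet        # inspect why the kubelet keeps exiting
	out/minikube-linux-arm64 start -p <profile> \
	  --extra-config=kubelet.cgroup-driver=systemd

Note, though, that the kubelet journal later in this report points at a different root cause on this host: kubelet v1.35 refusing to run on cgroups v1 (the FailCgroupV1 validation from the preflight warning), which the cgroup-driver flag alone would not address.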
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:31:28 no-preload-970975 containerd[759]: time="2025-12-18T01:31:28.470947504Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.711596763Z" level=info msg="No images store for sha256:93523640e0a56d4e8b1c8a3497b218ff0cad45dc41c5de367125514543645a73"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.713869317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\""
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.723846633Z" level=info msg="ImageCreate event name:\"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:29 no-preload-970975 containerd[759]: time="2025-12-18T01:31:29.727456559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.796825379Z" level=info msg="No images store for sha256:e78123e3dd3a833d4e1feffb3fc0a121f3dd689abacf9b7f8984f026b95c56ec"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.799106228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\""
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.807433713Z" level=info msg="ImageCreate event name:\"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:30 no-preload-970975 containerd[759]: time="2025-12-18T01:31:30.808922925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.292875381Z" level=info msg="No images store for sha256:78d3927c747311a5af27ec923ab6d07a2c1ad9cff4754323abf6c5c08cf054a5"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.295130606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\""
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.303984182Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:32 no-preload-970975 containerd[759]: time="2025-12-18T01:31:32.305000224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.336005639Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.338266928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.348579276Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:33 no-preload-970975 containerd[759]: time="2025-12-18T01:31:33.349580951Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.488112742Z" level=info msg="No images store for sha256:90c4ca45066b118d6cc8f6102ba2fea77739b71c04f0bdafeef225127738ea35"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.491177326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\""
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.502169199Z" level=info msg="ImageCreate event name:\"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.503038028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-rc.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.888978136Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.891655209Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.901388576Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 18 01:31:34 no-preload-970975 containerd[759]: time="2025-12-18T01:31:34.901784046Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:41:15.326339    6712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:41:15.326998    6712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:41:15.327964    6712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:41:15.329469    6712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:41:15.329783    6712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:41:15 up  8:23,  0 user,  load average: 0.72, 1.68, 2.02
	Linux no-preload-970975 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:41:12 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:41:12 no-preload-970975 kubelet[6588]: E1218 01:41:12.458928    6588 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:41:12 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:41:12 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:41:13 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 437.
	Dec 18 01:41:13 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:41:13 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:41:13 no-preload-970975 kubelet[6594]: E1218 01:41:13.213486    6594 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:41:13 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:41:13 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:41:13 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 438.
	Dec 18 01:41:13 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:41:13 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:41:13 no-preload-970975 kubelet[6601]: E1218 01:41:13.968734    6601 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:41:13 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:41:13 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:41:14 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 439.
	Dec 18 01:41:14 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:41:14 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:41:14 no-preload-970975 kubelet[6627]: E1218 01:41:14.756145    6627 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:41:14 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:41:14 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:41:15 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 440.
	Dec 18 01:41:15 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:41:15 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
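The restart loop above (counter at 440) is kubelet v1.35's cgroup v1 validation rejecting the host, matching the FailCgroupV1 preflight warning earlier. A standard way to confirm which cgroup version the host runs (illustration, not part of the test output):

	stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" -> cgroups v2; "tmpfs" -> legacy cgroups v1, which trips the
	# kubelet v1.35 validation seen in the journal above.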
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 6 (345.306458ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1218 01:41:15.827048 1542173 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
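The stderr above shows why the status probe failed: the profile's endpoint is missing from the kubeconfig. The fix the stdout itself suggests is `minikube update-context`, with the profile name from this test:

	out/minikube-linux-arm64 update-context -p no-preload-970975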
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (85.85s)

x
+
TestStartStop/group/no-preload/serial/SecondStart (370.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1218 01:41:21.939309 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:28.299785 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:32.181483 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:43.270121 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:41:52.663128 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:42:10.972377 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:42:33.625566 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:42:40.446979 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:42:57.378478 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:43:25.214933 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:43:55.547892 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:45:04.395247 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 80 (6m8.111637859s)

-- stdout --
	* [no-preload-970975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-970975" primary control-plane node in "no-preload-970975" cluster
	* Pulling base image v0.0.48-1765966054-22186 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1218 01:41:17.364681 1542458 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:41:17.364846 1542458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:41:17.364875 1542458 out.go:374] Setting ErrFile to fd 2...
	I1218 01:41:17.364894 1542458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:41:17.365168 1542458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:41:17.365597 1542458 out.go:368] Setting JSON to false
	I1218 01:41:17.366532 1542458 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30224,"bootTime":1765991854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:41:17.366626 1542458 start.go:143] virtualization:  
	I1218 01:41:17.369453 1542458 out.go:179] * [no-preload-970975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:41:17.373146 1542458 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:41:17.373244 1542458 notify.go:221] Checking for updates...
	I1218 01:41:17.378986 1542458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:41:17.381940 1542458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:17.384732 1542458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:41:17.387579 1542458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:41:17.390446 1542458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:41:17.393789 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:17.394396 1542458 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:41:17.426513 1542458 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:41:17.426640 1542458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:41:17.488029 1542458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:41:17.478703453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:41:17.488135 1542458 docker.go:319] overlay module found
	I1218 01:41:17.491211 1542458 out.go:179] * Using the docker driver based on existing profile
	I1218 01:41:17.494107 1542458 start.go:309] selected driver: docker
	I1218 01:41:17.494124 1542458 start.go:927] validating driver "docker" against &{Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:17.494227 1542458 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:41:17.494955 1542458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:41:17.562043 1542458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:41:17.552976354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:41:17.562397 1542458 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 01:41:17.562433 1542458 cni.go:84] Creating CNI manager for ""
	I1218 01:41:17.562482 1542458 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:41:17.562540 1542458 start.go:353] cluster config:
	{Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:17.565742 1542458 out.go:179] * Starting "no-preload-970975" primary control-plane node in "no-preload-970975" cluster
	I1218 01:41:17.568662 1542458 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:41:17.571552 1542458 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:41:17.574233 1542458 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:41:17.574310 1542458 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:41:17.574357 1542458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json ...
	I1218 01:41:17.574663 1542458 cache.go:107] acquiring lock: {Name:mkbe76c9f71177ead8df5bdae626dba72c24e88a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574752 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1218 01:41:17.574760 1542458 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.281µs
	I1218 01:41:17.574771 1542458 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1218 01:41:17.574783 1542458 cache.go:107] acquiring lock: {Name:mk73deadf102b9ef2729ab344cb753d1e81c8e69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574814 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1218 01:41:17.574818 1542458 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 36.988µs
	I1218 01:41:17.574825 1542458 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574834 1542458 cache.go:107] acquiring lock: {Name:mk08959f4f9aec2f8cb7736193533393f169491b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574861 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1218 01:41:17.574866 1542458 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 32.787µs
	I1218 01:41:17.574871 1542458 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574881 1542458 cache.go:107] acquiring lock: {Name:mk51756ddbebcd3ad705096b7bac91c4370ab67f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574908 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1218 01:41:17.574913 1542458 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 32.615µs
	I1218 01:41:17.574918 1542458 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574927 1542458 cache.go:107] acquiring lock: {Name:mkf6c55bc605708b579c41afc97203c8d4e81ed8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574954 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1218 01:41:17.574958 1542458 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 32.934µs
	I1218 01:41:17.574964 1542458 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574972 1542458 cache.go:107] acquiring lock: {Name:mk1ebccb0216e63c057736909b9d1bea2501f43c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575000 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1218 01:41:17.575005 1542458 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 34.018µs
	I1218 01:41:17.575011 1542458 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1218 01:41:17.575028 1542458 cache.go:107] acquiring lock: {Name:mk273a40d27e5765473ae1c9ccf1347edbca61c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575052 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1218 01:41:17.575056 1542458 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 29.734µs
	I1218 01:41:17.575061 1542458 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1218 01:41:17.575071 1542458 cache.go:107] acquiring lock: {Name:mkb0d564e902314f0008f6dd25799cc8c98892bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575096 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1218 01:41:17.575101 1542458 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.319µs
	I1218 01:41:17.575107 1542458 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1218 01:41:17.575113 1542458 cache.go:87] Successfully saved all images to host disk.
	I1218 01:41:17.593931 1542458 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:41:17.593955 1542458 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:41:17.593976 1542458 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:41:17.594007 1542458 start.go:360] acquireMachinesLock for no-preload-970975: {Name:mkc5466bd6e57a370f52d05d09914f47211c2efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.594062 1542458 start.go:364] duration metric: took 35.782µs to acquireMachinesLock for "no-preload-970975"
	I1218 01:41:17.594089 1542458 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:41:17.594095 1542458 fix.go:54] fixHost starting: 
	I1218 01:41:17.594362 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:17.612849 1542458 fix.go:112] recreateIfNeeded on no-preload-970975: state=Stopped err=<nil>
	W1218 01:41:17.612890 1542458 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:41:17.616118 1542458 out.go:252] * Restarting existing docker container for "no-preload-970975" ...
	I1218 01:41:17.616203 1542458 cli_runner.go:164] Run: docker start no-preload-970975
	I1218 01:41:17.884856 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:17.905905 1542458 kic.go:430] container "no-preload-970975" state is running.
	I1218 01:41:17.906316 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:17.937083 1542458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json ...
	I1218 01:41:17.937308 1542458 machine.go:94] provisionDockerMachine start ...
	I1218 01:41:17.937366 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:17.956149 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:17.956499 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:17.956517 1542458 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:41:17.957070 1542458 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55092->127.0.0.1:34212: read: connection reset by peer
	I1218 01:41:21.112268 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-970975
	
	I1218 01:41:21.112295 1542458 ubuntu.go:182] provisioning hostname "no-preload-970975"
	I1218 01:41:21.112359 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.130603 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:21.130920 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:21.130938 1542458 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-970975 && echo "no-preload-970975" | sudo tee /etc/hostname
	I1218 01:41:21.297556 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-970975
	
	I1218 01:41:21.297646 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.320590 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:21.320958 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:21.320986 1542458 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-970975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-970975/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-970975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:41:21.476955 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:41:21.476981 1542458 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:41:21.477006 1542458 ubuntu.go:190] setting up certificates
	I1218 01:41:21.477017 1542458 provision.go:84] configureAuth start
	I1218 01:41:21.477082 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:21.494228 1542458 provision.go:143] copyHostCerts
	I1218 01:41:21.494310 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:41:21.494324 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:41:21.494401 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:41:21.494522 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:41:21.494533 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:41:21.494569 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:41:21.494641 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:41:21.494660 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:41:21.494691 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:41:21.494755 1542458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.no-preload-970975 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-970975]
	I1218 01:41:21.673721 1542458 provision.go:177] copyRemoteCerts
	I1218 01:41:21.673787 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:41:21.673828 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.691241 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:21.796420 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:41:21.814210 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:41:21.832654 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:41:21.850820 1542458 provision.go:87] duration metric: took 373.776889ms to configureAuth
	I1218 01:41:21.850846 1542458 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:41:21.851039 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:21.851046 1542458 machine.go:97] duration metric: took 3.913731319s to provisionDockerMachine
	I1218 01:41:21.851053 1542458 start.go:293] postStartSetup for "no-preload-970975" (driver="docker")
	I1218 01:41:21.851066 1542458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:41:21.851125 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:41:21.851174 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.867950 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:21.976450 1542458 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:41:21.979834 1542458 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:41:21.979870 1542458 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:41:21.979882 1542458 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:41:21.979967 1542458 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:41:21.980082 1542458 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:41:21.980195 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:41:21.987678 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:41:22.007779 1542458 start.go:296] duration metric: took 156.709262ms for postStartSetup
	I1218 01:41:22.007867 1542458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:41:22.007919 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.027575 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.133734 1542458 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:41:22.138514 1542458 fix.go:56] duration metric: took 4.544410806s for fixHost
	I1218 01:41:22.138549 1542458 start.go:83] releasing machines lock for "no-preload-970975", held for 4.544464704s
	I1218 01:41:22.138644 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:22.157798 1542458 ssh_runner.go:195] Run: cat /version.json
	I1218 01:41:22.157854 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.158122 1542458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:41:22.158189 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.181525 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.198466 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.397543 1542458 ssh_runner.go:195] Run: systemctl --version
	I1218 01:41:22.404123 1542458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:41:22.408396 1542458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:41:22.408478 1542458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:41:22.416316 1542458 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 01:41:22.416385 1542458 start.go:496] detecting cgroup driver to use...
	I1218 01:41:22.416431 1542458 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:41:22.416498 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:41:22.433783 1542458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:41:22.447542 1542458 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:41:22.447641 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:41:22.463765 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:41:22.477008 1542458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:41:22.587523 1542458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:41:22.731488 1542458 docker.go:234] disabling docker service ...
	I1218 01:41:22.731561 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:41:22.747388 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:41:22.761578 1542458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:41:22.877887 1542458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:41:23.031065 1542458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:41:23.045226 1542458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:41:23.061762 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:41:23.072968 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:41:23.082631 1542458 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:41:23.082726 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:41:23.091532 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:41:23.101058 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:41:23.110071 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:41:23.119106 1542458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:41:23.127834 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:41:23.137037 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:41:23.145854 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:41:23.155263 1542458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:41:23.162940 1542458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:41:23.170628 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:23.282537 1542458 ssh_runner.go:195] Run: sudo systemctl restart containerd
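The sed edits above pin containerd to the cgroupfs cgroup driver (SystemdCgroup = false) so that it matches the cgroupDriver: cgroupfs value minikube writes into the kubelet configuration; if the two drivers disagree, pods fail in hard-to-diagnose ways. A quick consistency check after the restart, assuming the paths used in this run:

	# containerd side: SystemdCgroup should be false when kubelet uses cgroupfs.
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# kubelet side: the config rendered onto the node.
	grep -n 'cgroupDriver' /var/lib/kubelet/config.yaml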
	I1218 01:41:23.387115 1542458 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:41:23.387237 1542458 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:41:23.391563 1542458 start.go:564] Will wait 60s for crictl version
	I1218 01:41:23.391643 1542458 ssh_runner.go:195] Run: which crictl
	I1218 01:41:23.395601 1542458 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:41:23.420820 1542458 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:41:23.420915 1542458 ssh_runner.go:195] Run: containerd --version
	I1218 01:41:23.441612 1542458 ssh_runner.go:195] Run: containerd --version
	I1218 01:41:23.470931 1542458 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:41:23.474060 1542458 cli_runner.go:164] Run: docker network inspect no-preload-970975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:41:23.491578 1542458 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1218 01:41:23.495808 1542458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:41:23.506072 1542458 kubeadm.go:884] updating cluster {Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:41:23.506187 1542458 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:41:23.506254 1542458 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:41:23.531180 1542458 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:41:23.531204 1542458 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:41:23.531212 1542458 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:41:23.531314 1542458 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-970975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
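The generated unit uses the standard systemd override idiom: the bare ExecStart= line first clears the ExecStart inherited from the stock kubelet.service, and the following line re-declares it with minikube's flags. To inspect the merged result on the node (a sketch; systemctl cat is a stock systemd command):

	# Print kubelet.service plus every drop-in, in the order systemd merges them.
	systemctl cat kubelet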
	I1218 01:41:23.531379 1542458 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:41:23.556615 1542458 cni.go:84] Creating CNI manager for ""
	I1218 01:41:23.556686 1542458 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:41:23.556708 1542458 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 01:41:23.556730 1542458 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-970975 NodeName:no-preload-970975 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:41:23.556849 1542458 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-970975"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
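Notably, the KubeletConfiguration that minikube generates here never sets failCgroupV1, yet the kubelet on this node refuses to run on cgroup v1, which suggests the v1.35.0-rc.1 build enforces that validation by default (or it is set elsewhere). One way to confirm what the node actually received, using the config path from the ExecStart line above:

	# Look for an explicit cgroup v1 gate in the rendered kubelet config.
	grep -i 'failCgroupV1' /var/lib/kubelet/config.yaml || echo 'not set; version default applies'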
	I1218 01:41:23.556928 1542458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:41:23.564934 1542458 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:41:23.565015 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:41:23.572862 1542458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:41:23.585997 1542458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:41:23.599495 1542458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1218 01:41:23.614253 1542458 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:41:23.617922 1542458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:41:23.627614 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:23.769940 1542458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:41:23.786080 1542458 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975 for IP: 192.168.76.2
	I1218 01:41:23.786157 1542458 certs.go:195] generating shared ca certs ...
	I1218 01:41:23.786187 1542458 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:23.786374 1542458 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:41:23.786452 1542458 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:41:23.786479 1542458 certs.go:257] generating profile certs ...
	I1218 01:41:23.786915 1542458 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.key
	I1218 01:41:23.787042 1542458 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key.4df284eb
	I1218 01:41:23.787216 1542458 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key
	I1218 01:41:23.787372 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:41:23.787441 1542458 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:41:23.787473 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:41:23.787542 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:41:23.787589 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:41:23.787640 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:41:23.787726 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:41:23.788890 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:41:23.817320 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:41:23.835171 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:41:23.854360 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:41:23.874274 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:41:23.891844 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 01:41:23.909145 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:41:23.927246 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 01:41:23.945240 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:41:23.963173 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:41:23.980488 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:41:23.998141 1542458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
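
The "scp memory --> <path>" entries above indicate content that was rendered in memory and streamed to the remote path over the node's SSH session, rather than copied from a local file. A minimal sketch of that pattern using golang.org/x/crypto/ssh (all names, the port, and the "sudo tee" trick are illustrative, not minikube's actual ssh_runner API; authentication setup is omitted):

    package main

    import (
        "bytes"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote streams in-memory bytes to a remote path via "sudo tee",
    // mirroring the "scp memory --> <path>" log entries (hypothetical helper).
    func writeRemote(client *ssh.Client, data []byte, path string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
    }

    func main() {
        cfg := &ssh.ClientConfig{User: "docker", HostKeyCallback: ssh.InsecureIgnoreHostKey()}
        client, err := ssh.Dial("tcp", "127.0.0.1:34212", cfg) // port illustrative
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        if err := writeRemote(client, []byte("# kubeconfig contents"), "/var/lib/minikube/kubeconfig"); err != nil {
            log.Fatal(err)
        }
    }
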
	I1218 01:41:24.014660 1542458 ssh_runner.go:195] Run: openssl version
	I1218 01:41:24.021666 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.029705 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:41:24.037493 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.041469 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.041581 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.085117 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:41:24.092891 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.100861 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:41:24.108550 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.112664 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.112735 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.153886 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:41:24.161696 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.169404 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:41:24.177530 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.181402 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.181471 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.222746 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
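
The test/ln/hash/test sequence above is building OpenSSL's hashed CA directory layout: each PEM is hashed with "openssl x509 -hash" and must be reachable as "<subject-hash>.0" under /etc/ssl/certs (here 51391683.0, 3ec20f2e.0, and b5213941.0). A sketch of the same steps shelled out from Go (paths illustrative; requires write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // Same hash the log computes via "openssl x509 -hash -noout -in <pem>".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0") // e.g. b5213941.0
        _ = os.Remove(link)                                // mimic "ln -fs" (force)
        if err := os.Symlink(pem, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", link, "->", pem)
    }
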
	I1218 01:41:24.230660 1542458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:41:24.234767 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:41:24.276020 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:41:24.322161 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:41:24.363215 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:41:24.405810 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:41:24.447504 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
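
Each "-checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (24 h); a non-zero exit would force regeneration. An equivalent check with Go's crypto/x509, as a sketch (file path taken from the last log line):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // openssl x509 -checkend 86400: fail if the cert expires within 24h.
        if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; would regenerate")
        } else {
            fmt.Println("certificate valid beyond 24h")
        }
    }
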
	I1218 01:41:24.489540 1542458 kubeadm.go:401] StartCluster: {Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:24.489634 1542458 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:41:24.489710 1542458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:41:24.515412 1542458 cri.go:89] found id: ""
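
cri.go lists kube-system containers by shelling out to crictl with a pod-namespace label filter; the empty result above (found id: "") means no control-plane containers are running yet in the restarted node. A sketch wrapping the same invocation:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command the log runs over SSH.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatal(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Println("no kube-system containers found") // matches `found id: ""`
            return
        }
        fmt.Println("container IDs:", ids)
    }
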
	I1218 01:41:24.515486 1542458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:41:24.523200 1542458 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:41:24.523218 1542458 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:41:24.523266 1542458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:41:24.530588 1542458 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:41:24.531015 1542458 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:24.531121 1542458 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-970975" cluster setting kubeconfig missing "no-preload-970975" context setting]
	I1218 01:41:24.531398 1542458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
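
The repair step above adds the missing "no-preload-970975" cluster and context entries under a write lock before rewriting the kubeconfig. A minimal sketch of that repair with client-go's clientcmd package (field values illustrative, not minikube's actual kubeconfig.go logic):

    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/22186-1259289/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            log.Fatal(err)
        }
        name := "no-preload-970975"
        if _, ok := cfg.Clusters[name]; !ok { // "kubeconfig missing ... cluster setting"
            cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.76.2:8443"}
        }
        if _, ok := cfg.Contexts[name]; !ok { // "kubeconfig missing ... context setting"
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        }
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            log.Fatal(err)
        }
    }
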
	I1218 01:41:24.532672 1542458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:41:24.540238 1542458 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1218 01:41:24.540316 1542458 kubeadm.go:602] duration metric: took 17.091472ms to restartPrimaryControlPlane
	I1218 01:41:24.540342 1542458 kubeadm.go:403] duration metric: took 50.814694ms to StartCluster
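
The "duration metric" lines are plain wall-clock measurements taken around each phase, in the spirit of this sketch (the phase function is a stand-in):

    package main

    import (
        "log"
        "time"
    )

    func restartPrimaryControlPlane() error { return nil } // stand-in for the real phase

    func main() {
        start := time.Now()
        if err := restartPrimaryControlPlane(); err != nil {
            log.Fatal(err)
        }
        log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
    }
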
	I1218 01:41:24.540377 1542458 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:24.540439 1542458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:24.541093 1542458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:24.541305 1542458 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:41:24.541607 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:24.541651 1542458 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
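
"enable addons start" receives a map of addon names to desired state and toggles each enabled one in the profile, which is what the following "Setting addon ..." lines show. A hedged sketch of that dispatch (abbreviated map; enableAddon is a stand-in for the real per-addon path):

    package main

    import "log"

    // enableAddon stands in for the real per-addon enable path.
    func enableAddon(profile, name string) {
        log.Printf("Setting addon %s=true in %q", name, profile)
    }

    func main() {
        profile := "no-preload-970975"
        toEnable := map[string]bool{ // abbreviated from the full map above
            "storage-provisioner":  true,
            "default-storageclass": true,
            "dashboard":            true,
            "metrics-server":       false,
        }
        for name, want := range toEnable {
            if want {
                enableAddon(profile, name)
            }
        }
    }
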
	I1218 01:41:24.541714 1542458 addons.go:70] Setting storage-provisioner=true in profile "no-preload-970975"
	I1218 01:41:24.541728 1542458 addons.go:239] Setting addon storage-provisioner=true in "no-preload-970975"
	I1218 01:41:24.541756 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.541767 1542458 addons.go:70] Setting dashboard=true in profile "no-preload-970975"
	I1218 01:41:24.541785 1542458 addons.go:239] Setting addon dashboard=true in "no-preload-970975"
	W1218 01:41:24.541792 1542458 addons.go:248] addon dashboard should already be in state true
	I1218 01:41:24.541815 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.542236 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.542251 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.545008 1542458 addons.go:70] Setting default-storageclass=true in profile "no-preload-970975"
	I1218 01:41:24.545648 1542458 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-970975"
	I1218 01:41:24.545997 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.546822 1542458 out.go:179] * Verifying Kubernetes components...
	I1218 01:41:24.552927 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:24.570156 1542458 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:41:24.573081 1542458 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:41:24.573110 1542458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:41:24.573184 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
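
The SSH endpoint for the node container is discovered with a docker inspect Go-template over the published ports; here 22/tcp inside the container maps to host port 34212, which the sshutil lines below then dial. A sketch running the same template:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
            "no-preload-970975").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 34212
    }
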
	I1218 01:41:24.592695 1542458 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1218 01:41:24.595365 1542458 addons.go:239] Setting addon default-storageclass=true in "no-preload-970975"
	I1218 01:41:24.595416 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.595944 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.600301 1542458 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1218 01:41:24.603288 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1218 01:41:24.603315 1542458 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1218 01:41:24.603380 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:24.629343 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:24.636778 1542458 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:24.636799 1542458 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:41:24.636864 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:24.658544 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:24.669350 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:24.789107 1542458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:41:24.835097 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:41:24.837668 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1218 01:41:24.837689 1542458 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1218 01:41:24.853236 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1218 01:41:24.853264 1542458 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1218 01:41:24.869445 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:24.897171 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1218 01:41:24.897197 1542458 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1218 01:41:24.938270 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1218 01:41:24.938297 1542458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1218 01:41:24.951622 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1218 01:41:24.951648 1542458 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1218 01:41:24.971216 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1218 01:41:24.971238 1542458 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1218 01:41:24.983819 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1218 01:41:24.983893 1542458 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1218 01:41:24.996816 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1218 01:41:24.996840 1542458 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1218 01:41:25.012660 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.012686 1542458 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1218 01:41:25.026609 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.394540 1542458 node_ready.go:35] waiting up to 6m0s for node "no-preload-970975" to be "Ready" ...
	W1218 01:41:25.394678 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395049 1542458 retry.go:31] will retry after 363.399962ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
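
Every apply in this stretch fails the same way: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, which is not yet listening on localhost:8443 while the control plane restarts. A quick reachability probe illustrating the dial underlying the failing GET (sketch):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Roughly the TCP dial beneath kubectl's failing OpenAPI GET.
        conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
        if err != nil {
            fmt.Println("apiserver not ready:", err) // "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
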
	W1218 01:41:25.394729 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395067 1542458 retry.go:31] will retry after 247.961433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:25.394925 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395078 1542458 retry.go:31] will retry after 212.437007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
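
retry.go schedules each reattempt after a randomized, growing delay (212 ms, 247 ms, 363 ms, ... up to ~1.2 s in this run). A minimal sketch of that jittered-backoff loop (names and constants illustrative, not minikube's actual retry package):

    package main

    import (
        "errors"
        "log"
        "math/rand"
        "time"
    )

    func apply() error { return errors.New("connection refused") } // stand-in for kubectl apply

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            if err := apply(); err == nil {
                return
            }
            wait := delay/2 + time.Duration(rand.Int63n(int64(delay))) // add jitter
            log.Printf("will retry after %v", wait)
            time.Sleep(wait)
            delay *= 2 // grow toward the ~1.2s delays seen above
        }
        log.Print("giving up")
    }
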
	I1218 01:41:25.607792 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.643330 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:25.674866 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.674902 1542458 retry.go:31] will retry after 498.891168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:25.712162 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.712205 1542458 retry.go:31] will retry after 317.248393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.759542 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:25.819152 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.819190 1542458 retry.go:31] will retry after 494.070005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.030108 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:26.090657 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.090742 1542458 retry.go:31] will retry after 817.005428ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.174839 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:26.239145 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.239185 1542458 retry.go:31] will retry after 583.254902ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.314301 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:26.372805 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.372838 1542458 retry.go:31] will retry after 589.170119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.823020 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:26.882718 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.882755 1542458 retry.go:31] will retry after 886.612609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.908327 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:26.962817 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:26.979923 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.980023 1542458 retry.go:31] will retry after 562.729969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:27.024197 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.024231 1542458 retry.go:31] will retry after 1.217970865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:27.396236 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:27.543722 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:27.600982 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.601023 1542458 retry.go:31] will retry after 819.101552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.770394 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:27.830382 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.830419 1542458 retry.go:31] will retry after 1.67120434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.242456 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:28.302274 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.302318 1542458 retry.go:31] will retry after 1.635298762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.421000 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:28.487186 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.487222 1542458 retry.go:31] will retry after 1.446238744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:29.502431 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:29.561749 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:29.561785 1542458 retry.go:31] will retry after 2.842084958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:29.896301 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
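
Interleaved with the addon retries, node_ready.go is polling the node object for its Ready condition and hitting the same refused connection, this time on the node IP 192.168.76.2:8443. A minimal sketch of that kind of readiness check with client-go, using the kubeconfig path and node name from the log (an illustration, not minikube's node_ready.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used by the kubectl invocations in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The same GET the log shows: /api/v1/nodes/no-preload-970975.
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-970975", metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this is "connection refused".
			fmt.Println("error getting node (will retry):", err)
			return
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Println("Ready condition:", cond.Status)
			}
		}
	}
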
	I1218 01:41:29.934589 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:29.937978 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:30.014905 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:30.014994 1542458 retry.go:31] will retry after 3.020151942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:30.026594 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:30.026691 1542458 retry.go:31] will retry after 2.597509716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:32.395523 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:32.404827 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:32.465405 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.465451 1542458 retry.go:31] will retry after 2.786267996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.624505 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:32.701764 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.701805 1542458 retry.go:31] will retry after 1.750635941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:33.035842 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:33.099433 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:33.099469 1542458 retry.go:31] will retry after 2.666365739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:34.396276 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:34.452614 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:34.514417 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:34.514448 1542458 retry.go:31] will retry after 5.613247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.252571 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:35.317373 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.317406 1542458 retry.go:31] will retry after 2.675384889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.766334 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:35.831157 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.831192 1542458 retry.go:31] will retry after 7.35423349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:36.896400 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:37.993761 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:38.061649 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:38.061688 1542458 retry.go:31] will retry after 8.134260422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:39.396290 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:40.128917 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:40.209091 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:40.209125 1542458 retry.go:31] will retry after 4.385779308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:41.895504 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:43.185642 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:43.250764 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:43.250796 1542458 retry.go:31] will retry after 6.231358659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:44.395420 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:44.595764 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:44.664344 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:44.664380 1542458 retry.go:31] will retry after 11.847560445s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
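
The stderr above points at the actual failure: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443. The suggested --validate=false would only skip schema validation; the apply itself would still fail against a down apiserver. A minimal sketch (not part of the test suite) of a liveness probe that reproduces the "connection refused" directly:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert in this setup;
			// skip verification for a quick liveness probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// Matches the log: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver status:", resp.Status)
}
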
	I1218 01:41:46.196558 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:46.269491 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:46.269526 1542458 retry.go:31] will retry after 5.581587619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:46.396021 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:41:48.895451 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
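
The node_ready.go warnings above come from polling the node's Ready condition against the apiserver at 192.168.76.2:8443. A minimal client-go sketch of such a polling loop, assuming a kubeconfig at the default location; this is illustrative, not minikube's code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-970975", metav1.GetOptions{})
		if err != nil {
			// Matches the warnings above: the GET fails while the apiserver is down.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}
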
	I1218 01:41:49.482739 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:49.541749 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:49.541784 1542458 retry.go:31] will retry after 8.073539424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:51.396344 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:51.852115 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:51.915137 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:51.915172 1542458 retry.go:31] will retry after 10.294162413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:53.896157 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:41:56.395497 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:56.512767 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:56.572427 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:56.572461 1542458 retry.go:31] will retry after 11.314950955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:57.615630 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:57.686813 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:57.686850 1542458 retry.go:31] will retry after 29.037122126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
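
Each ssh_runner line above runs the same kubectl apply over SSH inside the node. Reduced to a local os/exec call with the paths taken verbatim from the log, a minimal illustrative sketch (not minikube's ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// sudo accepts leading VAR=value assignments, as in the logged command.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// With the apiserver down this exits with status 1, as logged above.
		fmt.Println("apply failed:", err)
	}
}
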
	W1218 01:41:58.395549 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:00.396394 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:02.209588 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:02.278784 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:02.278825 1542458 retry.go:31] will retry after 17.888279069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:02.895652 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:04.896306 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:07.396143 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:07.887683 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:07.967763 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:07.967796 1542458 retry.go:31] will retry after 14.642872465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:09.896073 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:12.396260 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:14.896042 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:16.896286 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:18.896459 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:20.168054 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:20.246791 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:20.246828 1542458 retry.go:31] will retry after 16.712663498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:21.395990 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:22.611852 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:22.673406 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:22.673445 1542458 retry.go:31] will retry after 21.192666201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:23.396132 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:25.895988 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:26.724599 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:42:26.782878 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:26.782912 1542458 retry.go:31] will retry after 21.608216211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:28.395363 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:30.396311 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:32.896262 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:35.395421 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:36.959868 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:37.028262 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:37.028401 1542458 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1218 01:42:37.396113 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:39.396234 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:41.396309 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:43.866395 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:43.896089 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:43.945124 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:43.945220 1542458 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1218 01:42:45.896258 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:48.392255 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:42:48.396036 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:48.465313 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:48.465411 1542458 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 01:42:48.469113 1542458 out.go:179] * Enabled addons: 
	I1218 01:42:48.471856 1542458 addons.go:530] duration metric: took 1m23.930193958s for enable addons: enabled=[]
	W1218 01:42:50.396402 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:52.896362 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:55.396228 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:57.896142 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:00.396105 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:02.896374 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:05.396267 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:07.896366 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:10.396369 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:12.896401 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:15.396146 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:17.396361 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:19.896362 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:22.396171 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:24.895542 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:27.395379 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:29.396071 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:31.895472 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:34.395939 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:36.396095 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:38.396414 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:40.896036 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:43.395479 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:45.896432 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:48.396351 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:50.896295 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:53.396396 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:55.896168 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:58.396230 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:00.405834 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:02.896166 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:04.896371 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:06.896416 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:09.396303 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:11.896341 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:14.395475 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:16.896423 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:19.396185 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:21.396245 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:23.896170 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:26.396177 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:28.896337 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:31.396072 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:33.396254 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:35.396495 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:37.896137 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:39.896262 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:42.396086 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:44.895371 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:46.896074 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:48.896364 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:51.396336 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:53.895498 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:56.396404 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:58.896175 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:00.896234 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:03.395378 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:05.396146 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:07.396374 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:09.396504 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:11.896374 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:14.396063 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:16.396329 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:18.896121 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:20.896314 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:23.395969 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:25.895511 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:28.396315 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:30.896158 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:32.896205 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:35.396218 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:37.896429 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:40.395870 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:42.896279 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:45.396405 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:47.896110 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:49.896393 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:52.396300 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:54.895921 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:56.895990 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:59.396176 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:01.396286 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:03.895455 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:05.896216 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:08.396470 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:10.895416 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:13.395373 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:15.396312 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:17.896137 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:19.896465 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:22.396170 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:24.396288 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:26.896407 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:29.395467 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:31.395837 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:33.396384 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:35.896138 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:37.896242 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:40.396163 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:42.396439 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:44.895536 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:46.896442 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:49.396435 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:51.895882 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:53.895973 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:56.395483 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:58.396182 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:00.396374 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:02.896391 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:05.396405 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:07.895435 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:09.896185 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:11.896309 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:14.395878 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:16.396300 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:18.896267 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:21.396109 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:23.895415 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:47:25.400717 1542458 node_ready.go:38] duration metric: took 6m0.00576723s for node "no-preload-970975" to be "Ready" ...
	I1218 01:47:25.403890 1542458 out.go:203] 
	W1218 01:47:25.406708 1542458 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 01:47:25.406730 1542458 out.go:285] * 
	W1218 01:47:25.413144 1542458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:47:25.416224 1542458 out.go:203] 
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 80
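Triage note: the stderr above repeats a single failure mode. Every kubectl apply for the addons and every poll of the node "Ready" condition is refused on port 8443, meaning kube-apiserver never came back up after the stop/restart; the addon failures are a symptom, not the cause. A first manual check could look like the sketch below, assuming the no-preload-970975 container is still running (crictl being on the kicbase image's PATH and the apiserver's /livez endpoint are assumptions here, not taken from this report):

	# Is a kube-apiserver container present inside the node, and is it crash-looping?
	docker exec no-preload-970975 crictl ps -a | grep kube-apiserver
	# Does anything answer on the apiserver port from inside the node?
	docker exec no-preload-970975 curl -sk https://localhost:8443/livez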
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-970975
helpers_test.go:244: (dbg) docker inspect no-preload-970975:
-- stdout --
	[
	    {
	        "Id": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	        "Created": "2025-12-18T01:31:17.073767234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1542592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:41:17.647711914Z",
	            "FinishedAt": "2025-12-18T01:41:16.31019941Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hosts",
	        "LogPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d-json.log",
	        "Name": "/no-preload-970975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	                "LowerDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970975",
	                "Source": "/var/lib/docker/volumes/no-preload-970975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970975",
	                "name.minikube.sigs.k8s.io": "no-preload-970975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8868484521f3c95b5d3384207de825b735eca41ce409d5b6097489f36adbd1f",
	            "SandboxKey": "/var/run/docker/netns/a8868484521f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34213"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34214"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34215"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:c4:c7:ad:db:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ab8f39244bcf5d4900f2aa17c2792aefcc54582c4c250699be8d71c4c2a27a9",
	                    "EndpointID": "f645b66df5fb6b54a71529960c16fc0d0eda8d0c9be9273792de657fffcd9b75",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970975",
	                        "b1403d931305"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
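Triage note: the inspect output shows the container itself is fine: State.Status is "running", the node has IP 192.168.76.2 on the no-preload-970975 network, and 8443/tcp is forwarded to 127.0.0.1:34215 on the host. That makes the connection-refused errors above a process problem inside the node rather than a Docker networking one. A quick probe through the host-side forward, assuming the port mapping is still live (/version is a standard apiserver path, used here only for illustration):

	curl -sk https://127.0.0.1:34215/version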
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975: exit status 2 (475.575864ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
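Triage note: with --format={{.Host}} the command reports only the host container state, which is why it prints "Running" while still exiting non-zero; the harness flags exit status 2 as "may be ok" because minikube encodes component health in the status exit code. Dropping the format filter would show the apiserver and kubelet fields separately (sketch against the same profile):

	out/minikube-linux-arm64 status -p no-preload-970975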
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970975 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-922343 image list --format=json                                                                                                                                                                                                              │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ pause   │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ unpause │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:39 UTC │                     │
	│ stop    │ -p no-preload-970975 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ addons  │ enable dashboard -p no-preload-970975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ start   │ -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-120615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:46 UTC │                     │
	│ stop    │ -p newest-cni-120615 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │ 18 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-120615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │ 18 Dec 25 01:47 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:47:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:47:25.355718 1550381 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:47:25.355915 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.355941 1550381 out.go:374] Setting ErrFile to fd 2...
	I1218 01:47:25.355960 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.356345 1550381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:47:25.356861 1550381 out.go:368] Setting JSON to false
	I1218 01:47:25.358213 1550381 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30592,"bootTime":1765991854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:47:25.358285 1550381 start.go:143] virtualization:  
	I1218 01:47:25.361184 1550381 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:47:25.364947 1550381 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:47:25.365006 1550381 notify.go:221] Checking for updates...
	I1218 01:47:25.370797 1550381 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:47:25.373705 1550381 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:25.376399 1550381 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:47:25.379145 1550381 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:47:25.381925 1550381 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1218 01:47:23.895415 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:47:25.400717 1542458 node_ready.go:38] duration metric: took 6m0.00576723s for node "no-preload-970975" to be "Ready" ...
	I1218 01:47:25.403890 1542458 out.go:203] 
	W1218 01:47:25.406708 1542458 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 01:47:25.406730 1542458 out.go:285] * 
	W1218 01:47:25.413144 1542458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:47:25.416224 1542458 out.go:203] 
	I1218 01:47:25.385246 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:25.385825 1550381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:47:25.416975 1550381 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:47:25.417132 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.547941 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.531353346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.548100 1550381 docker.go:319] overlay module found
	I1218 01:47:25.551414 1550381 out.go:179] * Using the docker driver based on existing profile
	I1218 01:47:25.554261 1550381 start.go:309] selected driver: docker
	I1218 01:47:25.554288 1550381 start.go:927] validating driver "docker" against &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.554406 1550381 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:47:25.555118 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.640875 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.630200713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.641222 1550381 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:47:25.641258 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:25.641307 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:25.641353 1550381 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.647668 1550381 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:47:25.650778 1550381 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:47:25.654776 1550381 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:47:25.657861 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:25.657921 1550381 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:47:25.657930 1550381 cache.go:65] Caching tarball of preloaded images
	I1218 01:47:25.658010 1550381 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:47:25.658022 1550381 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:47:25.658128 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:25.658345 1550381 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:47:25.717764 1550381 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:47:25.717789 1550381 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:47:25.717804 1550381 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:47:25.717832 1550381 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:47:25.717885 1550381 start.go:364] duration metric: took 36.159µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:47:25.717905 1550381 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:47:25.717910 1550381 fix.go:54] fixHost starting: 
	I1218 01:47:25.718174 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:25.745308 1550381 fix.go:112] recreateIfNeeded on newest-cni-120615: state=Stopped err=<nil>
	W1218 01:47:25.745341 1550381 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343365892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343381514Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343418092Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343433542Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343443264Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343454948Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343463957Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343476125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343492305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343522483Z" level=info msg="Connect containerd service"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343787182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.344338751Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359530690Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359745094Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359671930Z" level=info msg="Start subscribing containerd event"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.365773580Z" level=info msg="Start recovering state"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383747116Z" level=info msg="Start event monitor"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383803385Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383814093Z" level=info msg="Start streaming server"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383824997Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383833907Z" level=info msg="runtime interface starting up..."
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383841612Z" level=info msg="starting plugins..."
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383874005Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 01:41:23 no-preload-970975 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.385843444Z" level=info msg="containerd successfully booted in 0.065726s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:47:26.959759    3858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:47:26.960889    3858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:47:26.961673    3858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:47:26.963378    3858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:47:26.963687    3858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:47:27 up  8:29,  0 user,  load average: 0.39, 0.75, 1.46
	Linux no-preload-970975 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:47:23 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:47:24 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 18 01:47:24 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:24 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:24 no-preload-970975 kubelet[3737]: E1218 01:47:24.224190    3737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:47:24 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:47:24 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:47:24 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 18 01:47:24 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:24 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:24 no-preload-970975 kubelet[3742]: E1218 01:47:24.936177    3742 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:47:24 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:47:24 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:47:25 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 18 01:47:25 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:25 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:25 no-preload-970975 kubelet[3748]: E1218 01:47:25.718279    3748 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:47:25 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:47:25 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:47:26 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 18 01:47:26 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:26 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:26 no-preload-970975 kubelet[3770]: E1218 01:47:26.469386    3770 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:47:26 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:47:26 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 2 (343.043165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.08s)
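Note: the kubelet section above points at the likely root cause of this failure. With Kubernetes v1.35.0-rc.1 the kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), and systemd keeps restarting it (counter at 483 by the end of the log), so the apiserver never comes up and the node never reaches Ready. A minimal host-side check, assuming a Linux host like this Ubuntu 20.04 runner:

	stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" means cgroup v2 (unified); "tmpfs" means cgroup v1, matching the kubelet validation error here

Ubuntu 20.04 boots with cgroup v1 by default; adding systemd.unified_cgroup_hierarchy=1 to the kernel command line is the usual way to switch the host to cgroup v2, though whether that is viable for this CI image is not determined by this log.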

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (80.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-120615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1218 01:46:11.681532 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:46:39.389912 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:46:43.269567 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-120615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m19.043137949s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-120615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
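Note: this addon failure is secondary. Each `kubectl apply` callback died validating manifests against an apiserver that refuses connections on localhost:8443; the `--validate=false` hint in the stderr would only skip the OpenAPI download, and the apply itself would still fail to connect. A minimal pre-check sketch, reusing the kubeconfig and kubectl path from the failing callback above:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw /readyz
	# returns "ok" only when the apiserver is healthy; here it would hit the same connection refused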
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-120615
helpers_test.go:244: (dbg) docker inspect newest-cni-120615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	        "Created": "2025-12-18T01:37:46.267734033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1536406,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:37:46.322657241Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hosts",
	        "LogPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1-json.log",
	        "Name": "/newest-cni-120615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-120615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-120615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	                "LowerDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-120615",
	                "Source": "/var/lib/docker/volumes/newest-cni-120615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-120615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-120615",
	                "name.minikube.sigs.k8s.io": "newest-cni-120615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f76f018a6fd20ce57adf8edf73d97febe601a6c68392504c582065a9ed8fc45c",
	            "SandboxKey": "/var/run/docker/netns/f76f018a6fd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34208"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34211"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-120615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:cc:f5:06:cc:53",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3561ba231e6c48a625724c6039bb103aabf4482d7db78bad659da0b08d445469",
	                    "EndpointID": "a47896cd0687019046d2563e1820f4df3000f6f6a5fabac9bfc127e2ff82e230",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-120615",
	                        "dd9cd12a762d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615: exit status 6 (330.226579ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1218 01:47:22.538563 1549858 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-120615" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
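Note: exit status 6 here reflects a kubeconfig problem rather than a host problem: the stderr shows the profile's endpoint is missing from this run's kubeconfig, and the stdout already suggests the remedy. A minimal sketch using this run's binary and profile:

	out/minikube-linux-arm64 -p newest-cni-120615 update-context
	kubectl config get-contexts
	# update-context rewrites the profile's kubeconfig entry; get-contexts confirms it is present again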
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-120615 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-922343 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:33 UTC │ 18 Dec 25 01:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-922343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:34 UTC │
	│ start   │ -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:34 UTC │ 18 Dec 25 01:35 UTC │
	│ image   │ embed-certs-922343 image list --format=json                                                                                                                                                                                                              │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ pause   │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ unpause │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:39 UTC │                     │
	│ stop    │ -p no-preload-970975 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ addons  │ enable dashboard -p no-preload-970975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ start   │ -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-120615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
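The Audit table is rendered from minikube's command history, which is also kept on disk as newline-delimited JSON. A sketch for querying it outside of `minikube logs`, assuming the default audit location under the MINIKUBE_HOME shown below and the cloudevents-style entry layout (`jq` assumed available):

    # one entry per line; .data carries the command, profile, and timestamps
    jq -r 'select(.data.profile == "newest-cni-120615") | [.data.command, .data.startTime, .data.endTime] | @tsv' \
      "$MINIKUBE_HOME/logs/audit.json"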
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:41:17
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:41:17.364681 1542458 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:41:17.364846 1542458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:41:17.364875 1542458 out.go:374] Setting ErrFile to fd 2...
	I1218 01:41:17.364894 1542458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:41:17.365168 1542458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:41:17.365597 1542458 out.go:368] Setting JSON to false
	I1218 01:41:17.366532 1542458 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30224,"bootTime":1765991854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:41:17.366626 1542458 start.go:143] virtualization:  
	I1218 01:41:17.369453 1542458 out.go:179] * [no-preload-970975] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:41:17.373146 1542458 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:41:17.373244 1542458 notify.go:221] Checking for updates...
	I1218 01:41:17.378986 1542458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:41:17.381940 1542458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:17.384732 1542458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:41:17.387579 1542458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:41:17.390446 1542458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:41:17.393789 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:17.394396 1542458 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:41:17.426513 1542458 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:41:17.426640 1542458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:41:17.488029 1542458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:41:17.478703453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:41:17.488135 1542458 docker.go:319] overlay module found
	I1218 01:41:17.491211 1542458 out.go:179] * Using the docker driver based on existing profile
	I1218 01:41:17.494107 1542458 start.go:309] selected driver: docker
	I1218 01:41:17.494124 1542458 start.go:927] validating driver "docker" against &{Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:17.494227 1542458 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:41:17.494955 1542458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:41:17.562043 1542458 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:41:17.552976354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:41:17.562397 1542458 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 01:41:17.562433 1542458 cni.go:84] Creating CNI manager for ""
	I1218 01:41:17.562482 1542458 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:41:17.562540 1542458 start.go:353] cluster config:
	{Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:17.565742 1542458 out.go:179] * Starting "no-preload-970975" primary control-plane node in "no-preload-970975" cluster
	I1218 01:41:17.568662 1542458 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:41:17.571552 1542458 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:41:17.574233 1542458 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:41:17.574310 1542458 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:41:17.574357 1542458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json ...
	I1218 01:41:17.574663 1542458 cache.go:107] acquiring lock: {Name:mkbe76c9f71177ead8df5bdae626dba72c24e88a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574752 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1218 01:41:17.574760 1542458 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.281µs
	I1218 01:41:17.574771 1542458 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1218 01:41:17.574783 1542458 cache.go:107] acquiring lock: {Name:mk73deadf102b9ef2729ab344cb753d1e81c8e69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574814 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1218 01:41:17.574818 1542458 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 36.988µs
	I1218 01:41:17.574825 1542458 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574834 1542458 cache.go:107] acquiring lock: {Name:mk08959f4f9aec2f8cb7736193533393f169491b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574861 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1218 01:41:17.574866 1542458 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 32.787µs
	I1218 01:41:17.574871 1542458 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574881 1542458 cache.go:107] acquiring lock: {Name:mk51756ddbebcd3ad705096b7bac91c4370ab67f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574908 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1218 01:41:17.574913 1542458 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 32.615µs
	I1218 01:41:17.574918 1542458 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574927 1542458 cache.go:107] acquiring lock: {Name:mkf6c55bc605708b579c41afc97203c8d4e81ed8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.574954 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1218 01:41:17.574958 1542458 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 32.934µs
	I1218 01:41:17.574964 1542458 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1218 01:41:17.574972 1542458 cache.go:107] acquiring lock: {Name:mk1ebccb0216e63c057736909b9d1bea2501f43c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575000 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1218 01:41:17.575005 1542458 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 34.018µs
	I1218 01:41:17.575011 1542458 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1218 01:41:17.575028 1542458 cache.go:107] acquiring lock: {Name:mk273a40d27e5765473ae1c9ccf1347edbca61c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575052 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1218 01:41:17.575056 1542458 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 29.734µs
	I1218 01:41:17.575061 1542458 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1218 01:41:17.575071 1542458 cache.go:107] acquiring lock: {Name:mkb0d564e902314f0008f6dd25799cc8c98892bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.575096 1542458 cache.go:115] /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1218 01:41:17.575101 1542458 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.319µs
	I1218 01:41:17.575107 1542458 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1218 01:41:17.575113 1542458 cache.go:87] Successfully saved all images to host disk.
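Every `cache.go:115 ... exists` hit above maps to a tarball under the shared image cache, which is why this no-preload start downloads nothing. A sketch for inspecting that cache (paths as in the log, with MINIKUBE_HOME set as shown above):

    # arm64 image tarballs saved by earlier runs
    ls "$MINIKUBE_HOME/cache/images/arm64/registry.k8s.io"
    # per the log this should include kube-apiserver_v1.35.0-rc.1, kube-proxy_v1.35.0-rc.1, etcd_3.6.6-0, coredns/coredns_v1.13.1, pause_3.10.1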
	I1218 01:41:17.593931 1542458 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:41:17.593955 1542458 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:41:17.593976 1542458 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:41:17.594007 1542458 start.go:360] acquireMachinesLock for no-preload-970975: {Name:mkc5466bd6e57a370f52d05d09914f47211c2efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:41:17.594062 1542458 start.go:364] duration metric: took 35.782µs to acquireMachinesLock for "no-preload-970975"
	I1218 01:41:17.594089 1542458 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:41:17.594095 1542458 fix.go:54] fixHost starting: 
	I1218 01:41:17.594362 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:17.612849 1542458 fix.go:112] recreateIfNeeded on no-preload-970975: state=Stopped err=<nil>
	W1218 01:41:17.612890 1542458 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:41:17.616118 1542458 out.go:252] * Restarting existing docker container for "no-preload-970975" ...
	I1218 01:41:17.616203 1542458 cli_runner.go:164] Run: docker start no-preload-970975
	I1218 01:41:17.884856 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:17.905905 1542458 kic.go:430] container "no-preload-970975" state is running.
	I1218 01:41:17.906316 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:17.937083 1542458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/config.json ...
	I1218 01:41:17.937308 1542458 machine.go:94] provisionDockerMachine start ...
	I1218 01:41:17.937366 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:17.956149 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:17.956499 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:17.956517 1542458 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:41:17.957070 1542458 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55092->127.0.0.1:34212: read: connection reset by peer
	I1218 01:41:21.112268 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-970975
	
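The `connection reset by peer` on the first dial is a normal race: `docker start` returns before sshd inside the container is listening, and the provisioner simply retries until the `hostname` command succeeds a few seconds later. The same readiness check can be done by hand; a sketch using the forwarded port from the log:

    # wait until the container's sshd accepts connections on the forwarded port
    until nc -z 127.0.0.1 34212; do sleep 1; done
    echo "sshd is up"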
	I1218 01:41:21.112295 1542458 ubuntu.go:182] provisioning hostname "no-preload-970975"
	I1218 01:41:21.112359 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.130603 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:21.130920 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:21.130938 1542458 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-970975 && echo "no-preload-970975" | sudo tee /etc/hostname
	I1218 01:41:21.297556 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-970975
	
	I1218 01:41:21.297646 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.320590 1542458 main.go:143] libmachine: Using SSH client type: native
	I1218 01:41:21.320958 1542458 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I1218 01:41:21.320986 1542458 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-970975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-970975/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-970975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:41:21.476955 1542458 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:41:21.476981 1542458 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:41:21.477006 1542458 ubuntu.go:190] setting up certificates
	I1218 01:41:21.477017 1542458 provision.go:84] configureAuth start
	I1218 01:41:21.477082 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:21.494228 1542458 provision.go:143] copyHostCerts
	I1218 01:41:21.494310 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:41:21.494324 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:41:21.494401 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:41:21.494522 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:41:21.494533 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:41:21.494569 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:41:21.494641 1542458 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:41:21.494660 1542458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:41:21.494691 1542458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:41:21.494755 1542458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.no-preload-970975 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-970975]
	I1218 01:41:21.673721 1542458 provision.go:177] copyRemoteCerts
	I1218 01:41:21.673787 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:41:21.673828 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.691241 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:21.796420 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:41:21.814210 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:41:21.832654 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:41:21.850820 1542458 provision.go:87] duration metric: took 373.776889ms to configureAuth
	I1218 01:41:21.850846 1542458 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:41:21.851039 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:21.851046 1542458 machine.go:97] duration metric: took 3.913731319s to provisionDockerMachine
	I1218 01:41:21.851053 1542458 start.go:293] postStartSetup for "no-preload-970975" (driver="docker")
	I1218 01:41:21.851066 1542458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:41:21.851125 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:41:21.851174 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:21.867950 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:21.976450 1542458 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:41:21.979834 1542458 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:41:21.979870 1542458 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:41:21.979882 1542458 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:41:21.979967 1542458 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:41:21.980082 1542458 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:41:21.980195 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:41:21.987678 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:41:22.007779 1542458 start.go:296] duration metric: took 156.709262ms for postStartSetup
	I1218 01:41:22.007867 1542458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:41:22.007919 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.027575 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.133734 1542458 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:41:22.138514 1542458 fix.go:56] duration metric: took 4.544410806s for fixHost
	I1218 01:41:22.138549 1542458 start.go:83] releasing machines lock for "no-preload-970975", held for 4.544464704s
	I1218 01:41:22.138644 1542458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-970975
	I1218 01:41:22.157798 1542458 ssh_runner.go:195] Run: cat /version.json
	I1218 01:41:22.157854 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.158122 1542458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:41:22.158189 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:22.181525 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.198466 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:22.397543 1542458 ssh_runner.go:195] Run: systemctl --version
	I1218 01:41:22.404123 1542458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:41:22.408396 1542458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:41:22.408478 1542458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:41:22.416316 1542458 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 01:41:22.416385 1542458 start.go:496] detecting cgroup driver to use...
	I1218 01:41:22.416431 1542458 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:41:22.416498 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:41:22.433783 1542458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:41:22.447542 1542458 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:41:22.447641 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:41:22.463765 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:41:22.477008 1542458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:41:22.587523 1542458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:41:22.731488 1542458 docker.go:234] disabling docker service ...
	I1218 01:41:22.731561 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:41:22.747388 1542458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:41:22.761578 1542458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:41:22.877887 1542458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:41:23.031065 1542458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
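At this point cri-docker and docker are stopped and masked, leaving containerd as the only runtime the kubelet can talk to. Once the containerd restart a few lines below completes, the switch can be verified from the host; a sketch:

    out/minikube-linux-arm64 -p no-preload-970975 ssh -- sudo systemctl is-active containerd
    out/minikube-linux-arm64 -p no-preload-970975 ssh -- sudo systemctl is-enabled docker.socket   # expect: masked
    out/minikube-linux-arm64 -p no-preload-970975 ssh -- sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version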
	I1218 01:41:23.045226 1542458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:41:23.061762 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:41:23.072968 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:41:23.082631 1542458 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:41:23.082726 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:41:23.091532 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:41:23.101058 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:41:23.110071 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:41:23.119106 1542458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:41:23.127834 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:41:23.137037 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:41:23.145854 1542458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:41:23.155263 1542458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:41:23.162940 1542458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:41:23.170628 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:23.282537 1542458 ssh_runner.go:195] Run: sudo systemctl restart containerd
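The sed pipeline above edits /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, pause:3.10.1 as the sandbox image, the runc v2 shim, and unprivileged ports enabled. After the restart the effective values can be spot-checked; a sketch:

    out/minikube-linux-arm64 -p no-preload-970975 ssh -- \
      grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml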
	I1218 01:41:23.387115 1542458 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:41:23.387237 1542458 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:41:23.391563 1542458 start.go:564] Will wait 60s for crictl version
	I1218 01:41:23.391643 1542458 ssh_runner.go:195] Run: which crictl
	I1218 01:41:23.395601 1542458 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:41:23.420820 1542458 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:41:23.420915 1542458 ssh_runner.go:195] Run: containerd --version
	I1218 01:41:23.441612 1542458 ssh_runner.go:195] Run: containerd --version
	I1218 01:41:23.470931 1542458 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:41:23.474060 1542458 cli_runner.go:164] Run: docker network inspect no-preload-970975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:41:23.491578 1542458 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1218 01:41:23.495808 1542458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
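The one-liner above is an idempotent upsert: it filters out any existing `host.minikube.internal` entry, appends the gateway mapping, and copies the temp file back with sudo (a plain `>>` would be redirected by the unprivileged shell and fail). Verifying the result; a sketch:

    out/minikube-linux-arm64 -p no-preload-970975 ssh -- grep host.minikube.internal /etc/hosts
    # expect: 192.168.76.1  host.minikube.internal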
	I1218 01:41:23.506072 1542458 kubeadm.go:884] updating cluster {Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:41:23.506187 1542458 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:41:23.506254 1542458 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:41:23.531180 1542458 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:41:23.531204 1542458 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:41:23.531212 1542458 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:41:23.531314 1542458 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-970975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
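The unit text above becomes a systemd drop-in; the empty `ExecStart=` line is what clears the base unit's command so the override replaces it rather than appending a second one. The merged result can be inspected on the node; a sketch:

    # base unit plus the 10-kubeadm.conf drop-in written a few lines below
    out/minikube-linux-arm64 -p no-preload-970975 ssh -- systemctl cat kubelet
    out/minikube-linux-arm64 -p no-preload-970975 ssh -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf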
	I1218 01:41:23.531379 1542458 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:41:23.556615 1542458 cni.go:84] Creating CNI manager for ""
	I1218 01:41:23.556686 1542458 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:41:23.556708 1542458 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 01:41:23.556730 1542458 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-970975 NodeName:no-preload-970975 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:41:23.556849 1542458 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-970975"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
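The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before it is ever applied; a sketch, using the binaries path from the log:

    out/minikube-linux-arm64 -p no-preload-970975 ssh -- \
      sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new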
	I1218 01:41:23.556928 1542458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:41:23.564934 1542458 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:41:23.565015 1542458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:41:23.572862 1542458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:41:23.585997 1542458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:41:23.599495 1542458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1218 01:41:23.614253 1542458 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:41:23.617922 1542458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:41:23.627614 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:23.769940 1542458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:41:23.786080 1542458 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975 for IP: 192.168.76.2
	I1218 01:41:23.786157 1542458 certs.go:195] generating shared ca certs ...
	I1218 01:41:23.786187 1542458 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:23.786374 1542458 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:41:23.786452 1542458 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:41:23.786479 1542458 certs.go:257] generating profile certs ...
	I1218 01:41:23.786915 1542458 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.key
	I1218 01:41:23.787042 1542458 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key.4df284eb
	I1218 01:41:23.787216 1542458 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key
	I1218 01:41:23.787372 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:41:23.787441 1542458 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:41:23.787473 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:41:23.787542 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:41:23.787589 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:41:23.787640 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:41:23.787726 1542458 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:41:23.788890 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:41:23.817320 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:41:23.835171 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:41:23.854360 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:41:23.874274 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:41:23.891844 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 01:41:23.909145 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:41:23.927246 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 01:41:23.945240 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:41:23.963173 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:41:23.980488 1542458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:41:23.998141 1542458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:41:24.014660 1542458 ssh_runner.go:195] Run: openssl version
	I1218 01:41:24.021666 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.029705 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:41:24.037493 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.041469 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.041581 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:41:24.085117 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:41:24.092891 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.100861 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:41:24.108550 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.112664 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.112735 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:41:24.153886 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:41:24.161696 1542458 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.169404 1542458 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:41:24.177530 1542458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.181402 1542458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.181471 1542458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:41:24.222746 1542458 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
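	The repeating pattern above (test -s, ln -fs, openssl x509 -hash, test -L <hash>.0) installs each CA into OpenSSL's hashed-lookup directory: OpenSSL locates a CA under /etc/ssl/certs via a symlink named after the certificate's subject-name hash plus a .0 suffix (b5213941 is exactly the hash printed for minikubeCA.pem here). Condensed into one sketch; the final link creation is an assumption about the missing-link case, since the log only shows the check succeeding:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")                 # e.g. b5213941
	    sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -s "$pem" "/etc/ssl/certs/${hash}.0"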
	I1218 01:41:24.230660 1542458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:41:24.234767 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:41:24.276020 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:41:24.322161 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:41:24.363215 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:41:24.405810 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:41:24.447504 1542458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
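	Each `-checkend 86400` run above asks openssl to exit non-zero if the certificate expires within the next 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration. A sketch of reading that exit status:

	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	      echo "certificate valid for at least another 24h"
	    else
	      echo "certificate expires within 24h; regenerate it"
	    fi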
	I1218 01:41:24.489540 1542458 kubeadm.go:401] StartCluster: {Name:no-preload-970975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-970975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:41:24.489634 1542458 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:41:24.489710 1542458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:41:24.515412 1542458 cri.go:89] found id: ""
	I1218 01:41:24.515486 1542458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:41:24.523200 1542458 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:41:24.523218 1542458 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:41:24.523266 1542458 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:41:24.530588 1542458 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:41:24.531015 1542458 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-970975" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:24.531121 1542458 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-970975" cluster setting kubeconfig missing "no-preload-970975" context setting]
	I1218 01:41:24.531398 1542458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:24.532672 1542458 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:41:24.540238 1542458 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1218 01:41:24.540316 1542458 kubeadm.go:602] duration metric: took 17.091472ms to restartPrimaryControlPlane
	I1218 01:41:24.540342 1542458 kubeadm.go:403] duration metric: took 50.814694ms to StartCluster
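	The kubeconfig repair above fires because neither a cluster nor a context entry for "no-preload-970975" exists yet; minikube adds both under a write lock on the kubeconfig file. The same existence check from the command line (standard kubectl subcommands; profile name from the log):

	    kubectl config get-clusters         | grep -qx no-preload-970975 || echo "cluster entry missing"
	    kubectl config get-contexts -o name | grep -qx no-preload-970975 || echo "context entry missing"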
	I1218 01:41:24.540377 1542458 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:24.540439 1542458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:41:24.541093 1542458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:41:24.541305 1542458 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:41:24.541607 1542458 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:41:24.541651 1542458 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 01:41:24.541714 1542458 addons.go:70] Setting storage-provisioner=true in profile "no-preload-970975"
	I1218 01:41:24.541728 1542458 addons.go:239] Setting addon storage-provisioner=true in "no-preload-970975"
	I1218 01:41:24.541756 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.541767 1542458 addons.go:70] Setting dashboard=true in profile "no-preload-970975"
	I1218 01:41:24.541785 1542458 addons.go:239] Setting addon dashboard=true in "no-preload-970975"
	W1218 01:41:24.541792 1542458 addons.go:248] addon dashboard should already be in state true
	I1218 01:41:24.541815 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.542236 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.542251 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.545008 1542458 addons.go:70] Setting default-storageclass=true in profile "no-preload-970975"
	I1218 01:41:24.545648 1542458 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-970975"
	I1218 01:41:24.545997 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.546822 1542458 out.go:179] * Verifying Kubernetes components...
	I1218 01:41:24.552927 1542458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:41:24.570156 1542458 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:41:24.573081 1542458 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:41:24.573110 1542458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:41:24.573184 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
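	The Go template in the docker inspect call extracts the host port that Docker mapped onto the container's SSH port 22; `docker port` is the built-in shorthand for the same lookup (34212 below matches the sshutil lines that follow):

	    # template form, as used by cli_runner
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-970975
	    # equivalent built-in shorthand, prints e.g. 0.0.0.0:34212
	    docker port no-preload-970975 22/tcp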
	I1218 01:41:24.592695 1542458 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1218 01:41:24.595365 1542458 addons.go:239] Setting addon default-storageclass=true in "no-preload-970975"
	I1218 01:41:24.595416 1542458 host.go:66] Checking if "no-preload-970975" exists ...
	I1218 01:41:24.595944 1542458 cli_runner.go:164] Run: docker container inspect no-preload-970975 --format={{.State.Status}}
	I1218 01:41:24.600301 1542458 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1218 01:41:24.603288 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1218 01:41:24.603315 1542458 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1218 01:41:24.603380 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:24.629343 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:24.636778 1542458 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:24.636799 1542458 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:41:24.636864 1542458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-970975
	I1218 01:41:24.658544 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:24.669350 1542458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/no-preload-970975/id_rsa Username:docker}
	I1218 01:41:24.789107 1542458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:41:24.835097 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:41:24.837668 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1218 01:41:24.837689 1542458 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1218 01:41:24.853236 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1218 01:41:24.853264 1542458 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1218 01:41:24.869445 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:24.897171 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1218 01:41:24.897197 1542458 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1218 01:41:24.938270 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1218 01:41:24.938297 1542458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1218 01:41:24.951622 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1218 01:41:24.951648 1542458 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1218 01:41:24.971216 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1218 01:41:24.971238 1542458 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1218 01:41:24.983819 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1218 01:41:24.983893 1542458 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1218 01:41:24.996816 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1218 01:41:24.996840 1542458 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1218 01:41:25.012660 1542458 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.012686 1542458 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1218 01:41:25.026609 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.394540 1542458 node_ready.go:35] waiting up to 6m0s for node "no-preload-970975" to be "Ready" ...
	W1218 01:41:25.394678 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395049 1542458 retry.go:31] will retry after 363.399962ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:25.394729 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395067 1542458 retry.go:31] will retry after 247.961433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:25.394925 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.395078 1542458 retry.go:31] will retry after 212.437007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.607792 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:41:25.643330 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:25.674866 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.674902 1542458 retry.go:31] will retry after 498.891168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:25.712162 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.712205 1542458 retry.go:31] will retry after 317.248393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.759542 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:25.819152 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:25.819190 1542458 retry.go:31] will retry after 494.070005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.030108 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:26.090657 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.090742 1542458 retry.go:31] will retry after 817.005428ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.174839 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:26.239145 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.239185 1542458 retry.go:31] will retry after 583.254902ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.314301 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:26.372805 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.372838 1542458 retry.go:31] will retry after 589.170119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.823020 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:26.882718 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.882755 1542458 retry.go:31] will retry after 886.612609ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.908327 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:26.962817 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:26.979923 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:26.980023 1542458 retry.go:31] will retry after 562.729969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:27.024197 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.024231 1542458 retry.go:31] will retry after 1.217970865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:27.396236 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:27.543722 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:27.600982 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.601023 1542458 retry.go:31] will retry after 819.101552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.770394 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:27.830382 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:27.830419 1542458 retry.go:31] will retry after 1.67120434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.242456 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:28.302274 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.302318 1542458 retry.go:31] will retry after 1.635298762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.421000 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:28.487186 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:28.487222 1542458 retry.go:31] will retry after 1.446238744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:29.502431 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:29.561749 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:29.561785 1542458 retry.go:31] will retry after 2.842084958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:29.896301 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
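Note: interleaved with the addon retries, minikube polls the node's Ready condition (node_ready.go:55) against the apiserver's cluster address 192.168.76.2:8443 and hits the same refused connection. A hedged client-go sketch of such a readiness check; the helper name and structure are ours, not minikube's:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has condition Ready=True.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            // With the apiserver down this is the "connection refused"
            // error in the log, and the caller retries.
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "no-preload-970975")
        fmt.Println(ready, err)
    }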
	I1218 01:41:29.934589 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:41:29.937978 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:30.014905 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:30.014994 1542458 retry.go:31] will retry after 3.020151942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:30.026594 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:30.026691 1542458 retry.go:31] will retry after 2.597509716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:32.395523 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:32.404827 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:32.465405 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.465451 1542458 retry.go:31] will retry after 2.786267996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.624505 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:32.701764 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:32.701805 1542458 retry.go:31] will retry after 1.750635941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:33.035842 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:33.099433 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:33.099469 1542458 retry.go:31] will retry after 2.666365739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:34.396276 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:34.452614 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:34.514417 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:34.514448 1542458 retry.go:31] will retry after 5.613247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.252571 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:35.317373 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.317406 1542458 retry.go:31] will retry after 2.675384889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.766334 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:35.831157 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:35.831192 1542458 retry.go:31] will retry after 7.35423349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:41:36.896400 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:37.993761 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:38.061649 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:38.061688 1542458 retry.go:31] will retry after 8.134260422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1218 01:41:39.396290 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:40.128917 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:40.209091 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:40.209125 1542458 retry.go:31] will retry after 4.385779308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
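The retry.go:31 lines schedule each failed apply again after a jittered, roughly growing delay (storage-provisioner: ~4.4s, ~11.8s, ~11.3s, ...). A minimal sketch of that retry-with-backoff pattern, using a hypothetical retryExpo helper rather than minikube's actual retry package:

	// Sketch of retry with exponentially growing, jittered delays, as the
	// retry.go:31 lines above suggest. Illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo retries fn with jittered, doubling sleeps capped at maxDelay,
	// until fn succeeds or attempts are exhausted.
	func retryExpo(fn func() error, base, maxDelay time.Duration, attempts int) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			} else if i == attempts-1 {
				return err
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %s\n", jittered)
			time.Sleep(jittered)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
		return errors.New("unreachable")
	}

	func main() {
		_ = retryExpo(func() error { return errors.New("connection refused") },
			4*time.Second, 30*time.Second, 3)
	}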
	W1218 01:41:41.895504 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:43.185642 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:43.250764 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:43.250796 1542458 retry.go:31] will retry after 6.231358659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1218 01:41:44.395420 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:44.595764 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:44.664344 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:44.664380 1542458 retry.go:31] will retry after 11.847560445s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1218 01:41:46.196558 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:46.269491 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:46.269526 1542458 retry.go:31] will retry after 5.581587619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1218 01:41:46.396021 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:41:48.895451 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:49.482739 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:49.541749 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:49.541784 1542458 retry.go:31] will retry after 8.073539424s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1218 01:41:51.396344 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:51.852115 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:41:51.915137 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:51.915172 1542458 retry.go:31] will retry after 10.294162413s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1218 01:41:53.896157 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:41:56.395497 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:41:56.512767 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:41:56.572427 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:56.572461 1542458 retry.go:31] will retry after 11.314950955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1218 01:41:58.901889 1535974 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000959518s
	I1218 01:41:58.901915 1535974 kubeadm.go:319] 
	I1218 01:41:58.901973 1535974 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:41:58.902006 1535974 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:41:58.902111 1535974 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:41:58.902115 1535974 kubeadm.go:319] 
	I1218 01:41:58.902219 1535974 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:41:58.902251 1535974 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:41:58.902283 1535974 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:41:58.902287 1535974 kubeadm.go:319] 
	I1218 01:41:58.909121 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:41:58.909533 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:41:58.909635 1535974 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:41:58.909878 1535974 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1218 01:41:58.909884 1535974 kubeadm.go:319] 
	I1218 01:41:58.909948 1535974 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1218 01:41:58.910051 1535974 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-120615] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000959518s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
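The [kubelet-check] phase quoted above is literally an HTTP GET of http://127.0.0.1:10248/healthz; "connection refused" there means no kubelet process is listening at all, which is why the output points at 'systemctl status kubelet' and 'journalctl -xeu kubelet'. An illustrative Go version of the same probe (endpoint and semantics as quoted in the kubeadm output):

	// Probe the kubelet healthz endpoint the way kubeadm's kubelet-check
	// does ('curl -sSL http://127.0.0.1:10248/healthz').
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err != nil {
			// "connection refused" => kubelet is not running/listening.
			fmt.Println("kubelet healthz failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %s (%s)\n", resp.Status, body)
	}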
	
	I1218 01:41:58.910129 1535974 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1218 01:41:59.328841 1535974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:41:59.342624 1535974 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:41:59.342738 1535974 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:41:59.351529 1535974 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:41:59.351551 1535974 kubeadm.go:158] found existing configuration files:
	
	I1218 01:41:59.351607 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:41:59.359598 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:41:59.359688 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:41:59.367501 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:41:59.375582 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:41:59.375649 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:41:59.383413 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:41:59.391374 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:41:59.391444 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:41:59.399981 1535974 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:41:59.407991 1535974 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:41:59.408054 1535974 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
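The grep/rm sequence above is minikube's stale-kubeconfig check: each of the four /etc/kubernetes/*.conf files is searched for the expected control-plane URL and removed when the URL is absent (here the files do not exist at all, so grep exits with status 2 and the rm -f is a no-op). An illustrative Go equivalent of that loop (not minikube's actual code):

	// Remove kubeconfig files that do not reference the expected
	// control-plane endpoint; a missing file fails the read the same way
	// grep exits 2 above, and the remove is then a no-op.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const wantURL = "https://control-plane.minikube.internal:8443"
		confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, name := range confs {
			path := "/etc/kubernetes/" + name
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), wantURL) {
				fmt.Printf("%q may not be in %s - will remove\n", wantURL, path)
				_ = os.Remove(path) // ignore error, mirrors rm -f
			}
		}
	}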
	I1218 01:41:59.415368 1535974 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:41:59.457909 1535974 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1218 01:41:59.458215 1535974 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:41:59.537330 1535974 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:41:59.537416 1535974 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:41:59.537453 1535974 kubeadm.go:319] OS: Linux
	I1218 01:41:59.537500 1535974 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:41:59.537551 1535974 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:41:59.537599 1535974 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:41:59.537649 1535974 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:41:59.537698 1535974 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:41:59.537753 1535974 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:41:59.537800 1535974 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:41:59.537850 1535974 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:41:59.537895 1535974 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:41:59.601143 1535974 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:41:59.601259 1535974 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:41:59.601369 1535974 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:41:59.609176 1535974 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:41:59.612708 1535974 out.go:252]   - Generating certificates and keys ...
	I1218 01:41:59.612866 1535974 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:41:59.612946 1535974 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:41:59.613032 1535974 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 01:41:59.613110 1535974 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1218 01:41:59.613200 1535974 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1218 01:41:59.613293 1535974 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1218 01:41:59.613424 1535974 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1218 01:41:59.613519 1535974 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1218 01:41:59.613611 1535974 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 01:41:59.613738 1535974 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 01:41:59.613808 1535974 kubeadm.go:319] [certs] Using the existing "sa" key
	I1218 01:41:59.613893 1535974 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:41:59.965901 1535974 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:42:00.273593 1535974 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:42:00.517614 1535974 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:42:00.754315 1535974 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:42:00.831013 1535974 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:42:00.831849 1535974 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:42:00.834692 1535974 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:42:00.838062 1535974 out.go:252]   - Booting up control plane ...
	I1218 01:42:00.838173 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:42:00.838258 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:42:00.838866 1535974 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:42:00.861421 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:42:00.861532 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:42:00.869206 1535974 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:42:00.869621 1535974 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:42:00.869690 1535974 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:42:01.017070 1535974 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:42:01.017185 1535974 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:41:57.615630 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:41:57.686813 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:41:57.686850 1542458 retry.go:31] will retry after 29.037122126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1218 01:41:58.395549 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:00.396394 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:02.209588 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:02.278784 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:02.278825 1542458 retry.go:31] will retry after 17.888279069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1218 01:42:02.895652 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:04.896306 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:07.396143 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:07.887683 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:07.967763 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:07.967796 1542458 retry.go:31] will retry after 14.642872465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:09.896073 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:12.396260 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:14.896042 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:16.896286 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:18.896459 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:20.168054 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:20.246791 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:20.246828 1542458 retry.go:31] will retry after 16.712663498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:21.395990 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
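The node_ready.go:55 warnings come from a poll of the node object's Ready condition, which can never succeed while the apiserver is down. A minimal client-go sketch of that kind of check (a hypothetical stand-in, not minikube's source; node name and kubeconfig path taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-970975", metav1.GetOptions{})
            if err != nil {
                // This is the branch the log keeps hitting: the GET itself
                // fails with "connection refused" before any condition is read.
                fmt.Println("error getting node (will retry):", err)
                time.Sleep(2 * time.Second)
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
    }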
	I1218 01:42:22.611852 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:22.673406 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:22.673445 1542458 retry.go:31] will retry after 21.192666201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:23.396132 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:25.895988 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:26.724599 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:42:26.782878 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:42:26.782912 1542458 retry.go:31] will retry after 21.608216211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:28.395363 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:30.396311 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:32.896262 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:35.395421 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:36.959868 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:42:37.028262 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:37.028401 1542458 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1218 01:42:37.396113 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:39.396234 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:41.396309 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:43.866395 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:42:43.896089 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:43.945124 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:43.945220 1542458 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1218 01:42:45.896258 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:42:48.392255 1542458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:42:48.396036 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:48.465313 1542458 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:42:48.465411 1542458 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 01:42:48.469113 1542458 out.go:179] * Enabled addons: 
	I1218 01:42:48.471856 1542458 addons.go:530] duration metric: took 1m23.930193958s for enable addons: enabled=[]
	W1218 01:42:50.396402 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:52.896362 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:55.396228 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:42:57.896142 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:00.396105 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:02.896374 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:05.396267 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:07.896366 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:10.396369 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:12.896401 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:15.396146 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:17.396361 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:19.896362 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:22.396171 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:24.895542 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:27.395379 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:29.396071 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:31.895472 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:34.395939 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:36.396095 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:38.396414 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:40.896036 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:43.395479 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:45.896432 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:48.396351 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:50.896295 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:53.396396 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:55.896168 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:43:58.396230 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:00.405834 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:02.896166 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:04.896371 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:06.896416 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:09.396303 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:11.896341 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:14.395475 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:16.896423 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:19.396185 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:21.396245 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:23.896170 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:26.396177 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:28.896337 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:31.396072 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:33.396254 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:35.396495 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:37.896137 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:39.896262 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:42.396086 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:44.895371 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:46.896074 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:48.896364 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:51.396336 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:53.895498 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:56.396404 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:44:58.896175 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:00.896234 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:03.395378 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:05.396146 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:07.396374 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:09.396504 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:11.896374 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:14.396063 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:16.396329 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:18.896121 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:20.896314 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:23.395969 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:25.895511 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:28.396315 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:30.896158 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:32.896205 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:35.396218 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:37.896429 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:40.395870 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:42.896279 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:45.396405 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:47.896110 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:49.896393 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:52.396300 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:54.895921 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:45:56.895990 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:46:01.012416 1535974 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000248647s
	I1218 01:46:01.012441 1535974 kubeadm.go:319] 
	I1218 01:46:01.012495 1535974 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1218 01:46:01.012527 1535974 kubeadm.go:319] 	- The kubelet is not running
	I1218 01:46:01.012642 1535974 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1218 01:46:01.012648 1535974 kubeadm.go:319] 
	I1218 01:46:01.012746 1535974 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1218 01:46:01.012776 1535974 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1218 01:46:01.012805 1535974 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1218 01:46:01.012808 1535974 kubeadm.go:319] 
	I1218 01:46:01.017099 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:46:01.017529 1535974 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1218 01:46:01.017640 1535974 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:46:01.017873 1535974 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1218 01:46:01.017879 1535974 kubeadm.go:319] 
	I1218 01:46:01.017947 1535974 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
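This is the decisive failure: kubeadm gave the kubelet its full 4m0s to answer on its health endpoint and it never did, so no control-plane static pod ever started and every apiserver connection above was refused. The probe kubeadm describes is just an HTTP GET against the kubelet's healthz port; a minimal sketch of the same check (illustrative, not kubeadm's code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // kubeadm's kubelet-check boils down to polling this endpoint
        // until it answers 200 OK or the 4m0s deadline passes.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://127.0.0.1:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("kubelet still unhealthy at deadline; check 'journalctl -xeu kubelet'")
    }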
	I1218 01:46:01.017993 1535974 kubeadm.go:403] duration metric: took 8m7.229192197s to StartCluster
	I1218 01:46:01.018027 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:46:01.018087 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:46:01.042559 1535974 cri.go:89] found id: ""
	I1218 01:46:01.042584 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.042593 1535974 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:46:01.042599 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:46:01.042663 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:46:01.070638 1535974 cri.go:89] found id: ""
	I1218 01:46:01.070661 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.070670 1535974 logs.go:284] No container was found matching "etcd"
	I1218 01:46:01.070675 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:46:01.070733 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:46:01.095625 1535974 cri.go:89] found id: ""
	I1218 01:46:01.095652 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.095661 1535974 logs.go:284] No container was found matching "coredns"
	I1218 01:46:01.095667 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:46:01.095726 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:46:01.123024 1535974 cri.go:89] found id: ""
	I1218 01:46:01.123049 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.123058 1535974 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:46:01.123066 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:46:01.123127 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:46:01.149205 1535974 cri.go:89] found id: ""
	I1218 01:46:01.149273 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.149283 1535974 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:46:01.149291 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:46:01.149370 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:46:01.175919 1535974 cri.go:89] found id: ""
	I1218 01:46:01.175947 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.175957 1535974 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:46:01.175985 1535974 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:46:01.176067 1535974 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:46:01.203077 1535974 cri.go:89] found id: ""
	I1218 01:46:01.203101 1535974 logs.go:282] 0 containers: []
	W1218 01:46:01.203110 1535974 logs.go:284] No container was found matching "kindnet"
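With the kubelet dead, the post-mortem sweep above confirms the damage: crictl ps -a --quiet --name=<component> returns nothing for every control-plane component, meaning containerd never even created the containers. A small sketch of that sweep (assumes crictl on PATH and sudo access, as in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            // --quiet prints one container ID per line; an empty list is
            // what every component returns in this log.
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers\n", name, len(ids))
        }
    }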
	I1218 01:46:01.203121 1535974 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:46:01.203133 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:46:01.267505 1535974 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:46:01.258672    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.259298    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261036    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.261647    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:46:01.263429    4797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:46:01.267525 1535974 logs.go:123] Gathering logs for containerd ...
	I1218 01:46:01.267538 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:46:01.305435 1535974 logs.go:123] Gathering logs for container status ...
	I1218 01:46:01.305473 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:46:01.335002 1535974 logs.go:123] Gathering logs for kubelet ...
	I1218 01:46:01.335029 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:46:01.392317 1535974 logs.go:123] Gathering logs for dmesg ...
	I1218 01:46:01.392351 1535974 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1218 01:46:01.412420 1535974 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000248647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
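
	The SystemVerification warning above names cgroup v1 as the likely root cause. A minimal check of which cgroup version the node is actually running (a sketch; on a standard layout 'cgroup2fs' means v2 and 'tmpfs' means v1):
	
	    # Report the filesystem type mounted at /sys/fs/cgroup:
	    # 'cgroup2fs' => cgroup v2, 'tmpfs' => cgroup v1.
	    stat -fc %T /sys/fs/cgroup/
	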
	W1218 01:46:01.412472 1535974 out.go:285] * 
	W1218 01:46:01.412527 1535974 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr: verbatim repeat of the output above]
	
	W1218 01:46:01.412543 1535974 out.go:285] * 
	W1218 01:46:01.414976 1535974 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:46:01.421623 1535974 out.go:203] 
	W1218 01:46:01.425533 1535974 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr: verbatim repeat of the output above]
	
	W1218 01:46:01.425601 1535974 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1218 01:46:01.425624 1535974 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1218 01:46:01.428730 1535974 out.go:203] 
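
	A sketch of the retry the Suggestion line above recommends; the profile name is a placeholder, and whether the systemd cgroup driver actually clears the cgroup v1 validation failure on this host is an assumption, not something this log confirms:
	
	    # Inspect why the kubelet keeps dying, then retry with the suggested flag.
	    journalctl -xeu kubelet | tail -n 50
	    out/minikube-linux-arm64 start -p <profile> --driver=docker --container-runtime=containerd \
	      --extra-config=kubelet.cgroup-driver=systemd
	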
	W1218 01:45:59.396176 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:01.396286 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:03.895455 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:05.896216 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:08.396470 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:10.895416 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:13.395373 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:15.396312 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:17.896137 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:19.896465 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:22.396170 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:24.396288 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:26.896407 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:29.395467 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:31.395837 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:33.396384 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:35.896138 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:37.896242 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:40.396163 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:42.396439 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:44.895536 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:46.896442 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:49.396435 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:51.895882 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:53.895973 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:56.395483 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:46:58.396182 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:00.396374 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:02.896391 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:05.396405 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:07.895435 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:09.896185 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:11.896309 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:14.395878 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:16.396300 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:18.896267 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	W1218 01:47:21.396109 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
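
	Every retry above fails at the TCP layer, so the node object never comes into play. A direct probe of the endpoint separates "apiserver never started" from "node not Ready" (a sketch, using the node IP from this log):
	
	    # 'connection refused' means nothing is listening on 8443 at all,
	    # i.e. the kube-apiserver static pod never came up.
	    curl -k --max-time 5 https://192.168.76.2:8443/healthz
	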
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393026324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393099233Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393195370Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393270003Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393341599Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393405606Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393477645Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393542177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393629223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.393734886Z" level=info msg="Connect containerd service"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.394100211Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.394756958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.408838556Z" level=info msg="Start subscribing containerd event"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.408942858Z" level=info msg="Start recovering state"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.408840895Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.409318529Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448011988Z" level=info msg="Start event monitor"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448063680Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448073148Z" level=info msg="Start streaming server"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448083273Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448092323Z" level=info msg="runtime interface starting up..."
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448098198Z" level=info msg="starting plugins..."
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448110538Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 01:37:52 newest-cni-120615 containerd[754]: time="2025-12-18T01:37:52.448240225Z" level=info msg="containerd successfully booted in 0.081802s"
	Dec 18 01:37:52 newest-cni-120615 systemd[1]: Started containerd.service - containerd container runtime.
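
	The "failed to load cni during init" error above is expected on a node where no CNI config has been written yet; whether one ever appears is a quick check (a sketch, profile name taken from this log):
	
	    # An empty directory matches the 'no network config found in
	    # /etc/cni/net.d' error above.
	    out/minikube-linux-arm64 ssh -p newest-cni-120615 -- ls -la /etc/cni/net.d
	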
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:47:23.245875    5772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:47:23.246781    5772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:47:23.248289    5772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:47:23.248801    5772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:47:23.250693    5772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:47:23 up  8:29,  0 user,  load average: 0.25, 0.73, 1.46
	Linux newest-cni-120615 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:47:20 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:47:20 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 426.
	Dec 18 01:47:20 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:20 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:20 newest-cni-120615 kubelet[5649]: E1218 01:47:20.941883    5649 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:47:20 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:47:20 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:47:21 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 427.
	Dec 18 01:47:21 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:21 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:21 newest-cni-120615 kubelet[5655]: E1218 01:47:21.690570    5655 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:47:21 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:47:21 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:47:22 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 428.
	Dec 18 01:47:22 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:22 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:22 newest-cni-120615 kubelet[5668]: E1218 01:47:22.444514    5668 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:47:22 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:47:22 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:47:23 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 429.
	Dec 18 01:47:23 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:23 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:47:23 newest-cni-120615 kubelet[5760]: E1218 01:47:23.199793    5760 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:47:23 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:47:23 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
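
	The kubelet is in a tight systemd restart loop, failing its own config validation before it ever talks to the cluster. A sketch for inspecting the generated config, using the file path from the kubeadm output above (the FailCgroupV1 field name comes from the preflight warning, not from this dump):
	
	    # Check what kubeadm actually wrote for the cgroup settings.
	    out/minikube-linux-arm64 ssh -p newest-cni-120615 -- \
	      sudo grep -iE 'cgroup|failCgroupV1' /var/lib/kubelet/config.yaml
	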
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615: exit status 6 (340.303996ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1218 01:47:23.801399 1550087 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-120615" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-120615" apiserver is not running, skipping kubectl commands (state="Stopped")
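
	The status probe fails because the profile's endpoint is missing from the kubeconfig, which is what the "stale minikube-vm" warning in stdout suggests repairing. A sketch of that repair, using the command minikube itself recommends:
	
	    # Rewrite the kubeconfig entry for this profile, then re-check status.
	    out/minikube-linux-arm64 update-context -p newest-cni-120615
	    out/minikube-linux-arm64 status -p newest-cni-120615 --format='{{.APIServer}}'
	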
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (80.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (375.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 105 (6m10.415451306s)

                                                
                                                
-- stdout --
	* [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	* Pulling base image v0.0.48-1765966054-22186 ...
	* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 01:47:25.355718 1550381 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:47:25.355915 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.355941 1550381 out.go:374] Setting ErrFile to fd 2...
	I1218 01:47:25.355960 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.356345 1550381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:47:25.356861 1550381 out.go:368] Setting JSON to false
	I1218 01:47:25.358213 1550381 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30592,"bootTime":1765991854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:47:25.358285 1550381 start.go:143] virtualization:  
	I1218 01:47:25.361184 1550381 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:47:25.364947 1550381 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:47:25.365006 1550381 notify.go:221] Checking for updates...
	I1218 01:47:25.370797 1550381 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:47:25.373705 1550381 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:25.376399 1550381 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:47:25.379145 1550381 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:47:25.381925 1550381 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:47:25.385246 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:25.385825 1550381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:47:25.416975 1550381 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:47:25.417132 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.547941 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.531353346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.548100 1550381 docker.go:319] overlay module found
	I1218 01:47:25.551414 1550381 out.go:179] * Using the docker driver based on existing profile
	I1218 01:47:25.554261 1550381 start.go:309] selected driver: docker
	I1218 01:47:25.554288 1550381 start.go:927] validating driver "docker" against &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.554406 1550381 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:47:25.555118 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.640875 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.630200713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.641222 1550381 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:47:25.641258 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:25.641307 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:25.641353 1550381 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.647668 1550381 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:47:25.650778 1550381 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:47:25.654776 1550381 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:47:25.657861 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:25.657921 1550381 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:47:25.657930 1550381 cache.go:65] Caching tarball of preloaded images
	I1218 01:47:25.658010 1550381 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:47:25.658022 1550381 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:47:25.658128 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:25.658345 1550381 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:47:25.717764 1550381 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:47:25.717789 1550381 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:47:25.717804 1550381 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:47:25.717832 1550381 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:47:25.717885 1550381 start.go:364] duration metric: took 36.159µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:47:25.717905 1550381 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:47:25.717910 1550381 fix.go:54] fixHost starting: 
	I1218 01:47:25.718174 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:25.745308 1550381 fix.go:112] recreateIfNeeded on newest-cni-120615: state=Stopped err=<nil>
	W1218 01:47:25.745341 1550381 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:47:25.748580 1550381 out.go:252] * Restarting existing docker container for "newest-cni-120615" ...
	I1218 01:47:25.748689 1550381 cli_runner.go:164] Run: docker start newest-cni-120615
	I1218 01:47:26.093744 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:26.142570 1550381 kic.go:430] container "newest-cni-120615" state is running.
	I1218 01:47:26.143025 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:26.185359 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:26.185574 1550381 machine.go:94] provisionDockerMachine start ...
	I1218 01:47:26.185645 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:26.213286 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:26.213626 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:26.213647 1550381 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:47:26.214251 1550381 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51806->127.0.0.1:34217: read: connection reset by peer
	I1218 01:47:29.372266 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:47:29.372355 1550381 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:47:29.372452 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:29.391771 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:29.392072 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:29.392083 1550381 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:47:29.561538 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:47:29.561625 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:29.579579 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:29.579890 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:29.579907 1550381 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:47:29.737159 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:47:29.737184 1550381 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:47:29.737219 1550381 ubuntu.go:190] setting up certificates
	I1218 01:47:29.737230 1550381 provision.go:84] configureAuth start
	I1218 01:47:29.737295 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:29.756140 1550381 provision.go:143] copyHostCerts
	I1218 01:47:29.756217 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:47:29.756227 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:47:29.756310 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:47:29.756403 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:47:29.756408 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:47:29.756436 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:47:29.756487 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:47:29.756491 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:47:29.756514 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:47:29.756559 1550381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
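
The san=[...] list above is exactly what lands in the generated server certificate. A self-contained sketch of issuing an equivalent cert with Go's standard crypto/x509 (a fresh throwaway CA here stands in for the ca.pem/ca-key.pem pair minikube reuses; error checks elided for brevity):

// Sketch: mint a CA, then a server cert carrying the same SANs as the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in this profile
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-120615"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-120615"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
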
	I1218 01:47:30.464419 1550381 provision.go:177] copyRemoteCerts
	I1218 01:47:30.464487 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:47:30.464527 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.482395 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.589769 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:47:30.608046 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:47:30.627105 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 01:47:30.645433 1550381 provision.go:87] duration metric: took 908.179647ms to configureAuth
	I1218 01:47:30.645503 1550381 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:47:30.645738 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:30.645753 1550381 machine.go:97] duration metric: took 4.460171667s to provisionDockerMachine
	I1218 01:47:30.645761 1550381 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:47:30.645773 1550381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:47:30.645828 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:47:30.645876 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.663527 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.774279 1550381 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:47:30.777807 1550381 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:47:30.777838 1550381 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:47:30.777851 1550381 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:47:30.777919 1550381 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:47:30.778044 1550381 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:47:30.778177 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:47:30.786077 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:47:30.804331 1550381 start.go:296] duration metric: took 158.553882ms for postStartSetup
	I1218 01:47:30.804411 1550381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:47:30.804450 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.822410 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.925924 1550381 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:47:30.931214 1550381 fix.go:56] duration metric: took 5.213296131s for fixHost
	I1218 01:47:30.931236 1550381 start.go:83] releasing machines lock for "newest-cni-120615", held for 5.213342998s
	I1218 01:47:30.931301 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:30.952534 1550381 ssh_runner.go:195] Run: cat /version.json
	I1218 01:47:30.952560 1550381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:47:30.952584 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.952698 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.969636 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.973480 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:31.167774 1550381 ssh_runner.go:195] Run: systemctl --version
	I1218 01:47:31.174874 1550381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:47:31.179507 1550381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:47:31.179587 1550381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:47:31.187709 1550381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 01:47:31.187739 1550381 start.go:496] detecting cgroup driver to use...
	I1218 01:47:31.187790 1550381 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:47:31.187842 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:47:31.205437 1550381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:47:31.218917 1550381 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:47:31.218989 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:47:31.234859 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:47:31.247863 1550381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:47:31.361666 1550381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:47:31.478401 1550381 docker.go:234] disabling docker service ...
	I1218 01:47:31.478516 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:47:31.493181 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:47:31.506484 1550381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:47:31.622932 1550381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:47:31.755398 1550381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:47:31.768148 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:47:31.786320 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:47:31.795518 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:47:31.804506 1550381 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:47:31.804591 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:47:31.814205 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:47:31.823037 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:47:31.832187 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:47:31.841421 1550381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:47:31.849663 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:47:31.858543 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:47:31.867324 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
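
Taken together, the sed one-liners above rewrite /etc/containerd/config.toml in place so containerd matches the detected cgroup driver and the expected pause image. The two key edits expressed as a small Go program (an illustration of the same rewrites, not minikube's code):

// Sketch: apply the SystemdCgroup and sandbox_image rewrites from the sed commands above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(b)
	// Force cgroupfs, since the log detected "cgroupfs" as the host cgroup driver.
	s = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(s, "${1}SystemdCgroup = false")
	// Pin the sandbox (pause) image the kubelet expects.
	s = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(s, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
	if err := os.WriteFile(path, []byte(s), 0644); err != nil {
		panic(err)
	}
}
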
	I1218 01:47:31.878120 1550381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:47:31.886565 1550381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:47:31.894226 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:32.000205 1550381 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 01:47:32.119373 1550381 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:47:32.119494 1550381 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:47:32.123705 1550381 start.go:564] Will wait 60s for crictl version
	I1218 01:47:32.123796 1550381 ssh_runner.go:195] Run: which crictl
	I1218 01:47:32.127736 1550381 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:47:32.151646 1550381 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:47:32.151742 1550381 ssh_runner.go:195] Run: containerd --version
	I1218 01:47:32.171630 1550381 ssh_runner.go:195] Run: containerd --version
	I1218 01:47:32.197786 1550381 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:47:32.200756 1550381 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:47:32.216905 1550381 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:47:32.220989 1550381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
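
The bash pipeline above is an idempotent hosts-file update: drop any stale host.minikube.internal line, then append the current mapping. The equivalent in plain Go:

// Sketch: idempotently pin host.minikube.internal in /etc/hosts,
// mirroring the grep -v / echo / cp pipeline logged above.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(b), "\n"), "\n")
	kept := lines[:0] // in-place filter
	for _, l := range lines {
		if !strings.HasSuffix(l, "\thost.minikube.internal") {
			kept = append(kept, l)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
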
	I1218 01:47:32.234255 1550381 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:47:32.237186 1550381 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:47:32.237352 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:32.237431 1550381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:47:32.266567 1550381 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:47:32.266592 1550381 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:47:32.266653 1550381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:47:32.290056 1550381 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:47:32.290080 1550381 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:47:32.290087 1550381 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:47:32.290202 1550381 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:47:32.290272 1550381 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:47:32.317281 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:32.317305 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:32.317328 1550381 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:47:32.317382 1550381 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:47:32.317534 1550381 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
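The kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new as three YAML documents (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sketch of reading them back for inspection, assuming gopkg.in/yaml.v3 as the decoder (minikube itself renders this file from Go templates; this is just a convenient way to sanity-check the output):

// Sketch: iterate the multi-document kubeadm YAML and print each document's kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc["apiVersion"], doc["kind"])
	}
}
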
	I1218 01:47:32.317611 1550381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:47:32.325240 1550381 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:47:32.325360 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:47:32.332953 1550381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:47:32.345753 1550381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:47:32.358201 1550381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1218 01:47:32.371135 1550381 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:47:32.374910 1550381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:47:32.385004 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:32.524322 1550381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:47:32.543517 1550381 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:47:32.543581 1550381 certs.go:195] generating shared ca certs ...
	I1218 01:47:32.543620 1550381 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:32.543768 1550381 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:47:32.543847 1550381 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:47:32.543878 1550381 certs.go:257] generating profile certs ...
	I1218 01:47:32.544012 1550381 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:47:32.544110 1550381 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:47:32.544194 1550381 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:47:32.544363 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:47:32.544429 1550381 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:47:32.544454 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:47:32.544506 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:47:32.544561 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:47:32.544639 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:47:32.544713 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:47:32.545379 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:47:32.570494 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:47:32.589292 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:47:32.607511 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:47:32.630085 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:47:32.648120 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:47:32.665293 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:47:32.683115 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:47:32.701108 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:47:32.719384 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:47:32.737332 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:47:32.755228 1550381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:47:32.768547 1550381 ssh_runner.go:195] Run: openssl version
	I1218 01:47:32.775214 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.783201 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:47:32.791100 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.794909 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.794975 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.836868 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:47:32.844649 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.852089 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:47:32.859827 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.863774 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.863845 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.904999 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:47:32.912518 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.919928 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:47:32.927254 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.930966 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.931034 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.972378 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:47:32.979895 1550381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:47:32.983509 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:47:33.024763 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:47:33.066928 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:47:33.108240 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:47:33.150820 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:47:33.193721 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
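
Each openssl x509 ... -checkend 86400 run above asks one question: does the certificate expire within the next 24 hours? The same check in Go, using only the standard library:

// Sketch: the Go equivalent of `openssl x509 -noout -in <crt> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	b, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1) // same convention as openssl -checkend
	}
	fmt.Println("certificate is valid for at least another 24h")
}
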
	I1218 01:47:33.236344 1550381 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:33.236435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:47:33.236534 1550381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:47:33.262713 1550381 cri.go:89] found id: ""
	I1218 01:47:33.262784 1550381 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:47:33.270865 1550381 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:47:33.270885 1550381 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:47:33.270962 1550381 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:47:33.278569 1550381 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:47:33.279133 1550381 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-120615" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:33.279389 1550381 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-120615" cluster setting kubeconfig missing "newest-cni-120615" context setting]
	I1218 01:47:33.279869 1550381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.281782 1550381 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:47:33.289414 1550381 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1218 01:47:33.289446 1550381 kubeadm.go:602] duration metric: took 18.555667ms to restartPrimaryControlPlane
	I1218 01:47:33.289461 1550381 kubeadm.go:403] duration metric: took 53.123465ms to StartCluster
	I1218 01:47:33.289476 1550381 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.289537 1550381 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:33.290381 1550381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.290591 1550381 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:47:33.290894 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:33.290942 1550381 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 01:47:33.291049 1550381 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-120615"
	I1218 01:47:33.291069 1550381 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-120615"
	I1218 01:47:33.291087 1550381 addons.go:70] Setting dashboard=true in profile "newest-cni-120615"
	I1218 01:47:33.291142 1550381 addons.go:239] Setting addon dashboard=true in "newest-cni-120615"
	W1218 01:47:33.291166 1550381 addons.go:248] addon dashboard should already be in state true
	I1218 01:47:33.291217 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.291092 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.291788 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.291956 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.291099 1550381 addons.go:70] Setting default-storageclass=true in profile "newest-cni-120615"
	I1218 01:47:33.292357 1550381 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-120615"
	I1218 01:47:33.292683 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.296441 1550381 out.go:179] * Verifying Kubernetes components...
	I1218 01:47:33.299325 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:33.332793 1550381 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:47:33.338698 1550381 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:33.338720 1550381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:47:33.338786 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.346302 1550381 addons.go:239] Setting addon default-storageclass=true in "newest-cni-120615"
	I1218 01:47:33.346350 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.346767 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.347220 1550381 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1218 01:47:33.357584 1550381 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1218 01:47:33.364736 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1218 01:47:33.364766 1550381 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1218 01:47:33.364841 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.384388 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.388779 1550381 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:33.388806 1550381 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:47:33.388870 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.420777 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.424445 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.506937 1550381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:47:33.590614 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:33.623167 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:33.644036 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1218 01:47:33.644058 1550381 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1218 01:47:33.686194 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1218 01:47:33.686219 1550381 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1218 01:47:33.699257 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1218 01:47:33.699284 1550381 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1218 01:47:33.712575 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1218 01:47:33.712598 1550381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1218 01:47:33.726008 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1218 01:47:33.726036 1550381 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1218 01:47:33.739578 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1218 01:47:33.739601 1550381 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1218 01:47:33.752283 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1218 01:47:33.752306 1550381 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1218 01:47:33.765197 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1218 01:47:33.765228 1550381 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1218 01:47:33.778397 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:33.778463 1550381 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1218 01:47:33.791499 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:34.144394 1550381 api_server.go:52] waiting for apiserver process to appear ...
	I1218 01:47:34.144937 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:34.144564 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145084 1550381 retry.go:31] will retry after 226.399987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.144607 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145242 1550381 retry.go:31] will retry after 194.583533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.144818 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145308 1550381 retry.go:31] will retry after 316.325527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.341084 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:34.371646 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:34.416769 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.416804 1550381 retry.go:31] will retry after 482.49716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.445473 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.445504 1550381 retry.go:31] will retry after 401.349435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.462702 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:34.529683 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.529767 1550381 retry.go:31] will retry after 466.9672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.645712 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:34.847135 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:34.899725 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:34.915787 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.915821 1550381 retry.go:31] will retry after 680.448009ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.980399 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.980428 1550381 retry.go:31] will retry after 371.155762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.997728 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:35.075146 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.075188 1550381 retry.go:31] will retry after 528.393444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.145511 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:35.352321 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:35.422768 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.422808 1550381 retry.go:31] will retry after 703.678182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.597254 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:35.604769 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:35.645316 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:35.700025 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.700065 1550381 retry.go:31] will retry after 524.167729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:35.720166 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.720199 1550381 retry.go:31] will retry after 843.445988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.127505 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:36.145942 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:36.218437 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.218469 1550381 retry.go:31] will retry after 1.4365249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.224772 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:36.288029 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.288065 1550381 retry.go:31] will retry after 1.092662167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.564433 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:36.628283 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.628318 1550381 retry.go:31] will retry after 821.063441ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.645614 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.145021 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.381704 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:37.442129 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.442163 1550381 retry.go:31] will retry after 1.066797005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.450315 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:37.513152 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.513188 1550381 retry.go:31] will retry after 2.094232702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.645565 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.656033 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:37.728287 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.728341 1550381 retry.go:31] will retry after 2.192570718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.145856 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:38.509851 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:38.574127 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.574163 1550381 retry.go:31] will retry after 2.056176901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.645562 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:39.145843 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:39.608414 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:39.645902 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:39.677401 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.677446 1550381 retry.go:31] will retry after 2.219986296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.921684 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:39.986039 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.986071 1550381 retry.go:31] will retry after 1.874712757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
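
Interleaved with the failing applies, the log polls `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms, waiting for the apiserver process to appear. A sketch of that wait loop, assuming the same pgrep invocation and cadence (this mirrors the pattern visible in the log, not minikube's actual implementation):

// wait.go: hypothetical sketch of the ~500ms apiserver poll in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
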
	I1218 01:47:40.145336 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:40.630985 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:40.645468 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:40.721503 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:40.721589 1550381 retry.go:31] will retry after 5.659633915s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.145050 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:41.645071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:41.861275 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:41.897736 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:41.919445 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.919480 1550381 retry.go:31] will retry after 5.257989291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:41.968013 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.968047 1550381 retry.go:31] will retry after 2.407225539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:42.145507 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:42.645709 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:43.145827 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:43.645206 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:44.145140 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:44.375521 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:44.445301 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:44.445333 1550381 retry.go:31] will retry after 6.049252935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:44.645790 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:45.145091 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:45.646076 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:46.145377 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:46.381920 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:46.446240 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:46.446272 1550381 retry.go:31] will retry after 6.470588043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:46.645629 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:47.145934 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:47.178013 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:47.241089 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:47.241122 1550381 retry.go:31] will retry after 8.808880621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
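
The retry delays recorded by retry.go (2.06s, 2.22s, 1.87s, 5.66s, 5.26s, 6.05s, 8.81s, ...) grow roughly geometrically with random scatter, consistent with exponential backoff plus jitter. A minimal sketch of such a schedule, assuming a 2s base and up-to-100% jitter (the actual constants are not shown in this log):

// backoff.go: hypothetical sketch of exponential backoff with jitter;
// not minikube's retry.go, just the pattern its delays suggest.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func jitteredBackoff(base time.Duration, attempt int) time.Duration {
	d := base << attempt                      // exponential growth: base * 2^attempt
	j := time.Duration(rand.Int63n(int64(d))) // random jitter in [0, d)
	return d/2 + j/2                          // result lies in [d/2, d)
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		fmt.Println("would retry after", jitteredBackoff(2*time.Second, attempt))
	}
}
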
	I1218 01:47:47.645680 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:48.145730 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:48.646057 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:49.145645 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:49.646010 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:50.145037 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:50.495265 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:50.557628 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:50.557662 1550381 retry.go:31] will retry after 5.398438748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:50.645968 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:51.145305 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:51.645106 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.145818 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.645593 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.917095 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:53.016010 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:53.016044 1550381 retry.go:31] will retry after 7.672661981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:53.145281 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:53.645853 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:54.145129 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:54.645151 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.145097 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.645490 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.957008 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:56.023826 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.023863 1550381 retry.go:31] will retry after 8.13600998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.050917 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:56.116243 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.116276 1550381 retry.go:31] will retry after 5.600895051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.145475 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:56.645854 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:57.145640 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:57.645927 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:58.145109 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:58.645621 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:59.145858 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:59.645893 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.145118 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.645093 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.689724 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:00.750450 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:00.750485 1550381 retry.go:31] will retry after 19.327903144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:01.145862 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:01.645460 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:01.717566 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:48:01.782999 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:01.783030 1550381 retry.go:31] will retry after 18.603092159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
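
Note that the `--validate=false` hint printed with each error would only skip the client-side OpenAPI check; the subsequent request to the apiserver would still fail while it is down, so retrying the whole apply, as the log does, is the sensible response. A hypothetical apply-with-retry helper illustrating that step (fixed delay for brevity; the log above uses jittered backoff):

// apply.go: hypothetical sketch of the apply-and-retry step from the log;
// manifest path reused from the log, helper name invented for illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		if out, e := cmd.CombinedOutput(); e == nil {
			return nil
		} else {
			err = fmt.Errorf("%v: %s", e, out)
		}
		time.Sleep(2 * time.Second) // fixed delay for brevity
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
		fmt.Println("apply failed:", err)
	}
}
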
	I1218 01:48:02.145671 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:02.645087 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:03.145743 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:03.645040 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:04.145864 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:04.161047 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:48:04.272335 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:04.272373 1550381 retry.go:31] will retry after 12.170847168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:04.645651 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:05.145079 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:05.645793 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:06.145198 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:06.645126 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:07.145836 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:07.645773 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:08.145131 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:08.645630 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:09.145136 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:09.645143 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:10.145076 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:10.645910 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:11.146089 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:11.645790 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:12.145142 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:12.645270 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:13.145485 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:13.645137 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:14.145724 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:14.645837 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:15.146110 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:15.645847 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:16.145895 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
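
The interleaved ssh_runner.go lines above poll for a running apiserver process roughly every 500 ms; the timestamps land on .145 and .645 of each second. A minimal sketch of that poll follows, run locally for simplicity where minikube actually issues the command over SSH; waitForAPIServerProcess and the 30s timeout are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until a kube-apiserver process matching the minikube profile exists.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if a process matches the full command line.
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the .145/.645 cadence in the log
	}
	return fmt.Errorf("kube-apiserver process not found within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
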
	I1218 01:48:16.444141 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:48:16.505161 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:16.505200 1550381 retry.go:31] will retry after 25.656674631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
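
Each validation error above suggests --validate=false, but that flag only skips kubectl's client-side schema check against /openapi/v2; it would not help here, since the apiserver itself is refusing all connections on port 8443. An illustrative pre-check, not minikube's code, that gates an apply on the apiserver's /readyz endpoint (apiServerReady and the 2s timeout are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Probe the apiserver's readiness endpoint before attempting an apply.
func apiServerReady(addr string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves TLS with a cluster-internal CA; skip
		// verification for this readiness probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + addr + "/readyz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println("apiserver ready:", apiServerReady("localhost:8443"))
}
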
	I1218 01:48:16.645612 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:17.145123 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:17.645762 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:18.145134 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:18.645126 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:19.145081 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:19.645152 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:20.079482 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:20.141746 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.141779 1550381 retry.go:31] will retry after 22.047786735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.145903 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:20.387205 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:48:20.452144 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.452188 1550381 retry.go:31] will retry after 24.810473247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.645470 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:21.146015 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:21.645174 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:22.145273 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:22.645128 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:23.145100 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:23.645712 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:24.145139 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:24.646075 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:25.145371 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:25.645387 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:26.145943 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:26.645074 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:27.145918 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:27.645060 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:28.145641 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:28.645873 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:29.146022 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:29.645071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:30.145074 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:30.645956 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:31.145849 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:31.645447 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:32.145809 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:32.645085 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:33.146067 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:33.645142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:33.645253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:33.669719 1550381 cri.go:89] found id: ""
	I1218 01:48:33.669745 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.669754 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:33.669760 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:33.669817 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:33.695127 1550381 cri.go:89] found id: ""
	I1218 01:48:33.695150 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.695159 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:33.695164 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:33.695253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:33.719637 1550381 cri.go:89] found id: ""
	I1218 01:48:33.719659 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.719668 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:33.719674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:33.719778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:33.746705 1550381 cri.go:89] found id: ""
	I1218 01:48:33.746731 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.746740 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:33.746746 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:33.746805 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:33.774595 1550381 cri.go:89] found id: ""
	I1218 01:48:33.774620 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.774631 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:33.774638 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:33.774696 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:33.802090 1550381 cri.go:89] found id: ""
	I1218 01:48:33.802115 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.802123 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:33.802130 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:33.802187 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:33.827047 1550381 cri.go:89] found id: ""
	I1218 01:48:33.827084 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.827094 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:33.827100 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:33.827172 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:33.855186 1550381 cri.go:89] found id: ""
	I1218 01:48:33.855213 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.855222 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
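
The cri.go/logs.go block above runs one crictl query per control-plane component; an empty ID list is what produces each `No container was found matching "X"` warning. A hedged sketch of that sweep (sweepComponents is an illustrative name, not minikube's function):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List all CRI containers (running or exited) for each component name and
// collect their IDs; components with no containers are skipped.
func sweepComponents() map[string][]string {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	found := make(map[string][]string)
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			continue // 0 containers for this component
		}
		found[name] = ids
	}
	return found
}

func main() {
	for name, ids := range sweepComponents() {
		fmt.Println(name, ids)
	}
}
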
	I1218 01:48:33.855230 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:33.855241 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:33.910490 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:33.910527 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:33.925321 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:33.925361 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:33.990602 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:33.982192    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.983024    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.984725    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.985196    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.986663    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:33.990624 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:33.990636 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:34.016861 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:34.016901 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
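
The container-status command above relies on a shell fallback: `which crictl || echo crictl` locates the binary, and the trailing `|| sudo docker ps -a` means docker is consulted only when crictl is missing or fails. The same fallback expressed as a Go sketch (containerStatus is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
)

// Try crictl first; fall back to docker only if crictl errors out.
func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, derr := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if derr != nil {
		return "", fmt.Errorf("crictl: %v; docker: %v", err, derr)
	}
	return string(out), nil
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(status)
}
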
	I1218 01:48:36.546620 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:36.557304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:36.557390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:36.582868 1550381 cri.go:89] found id: ""
	I1218 01:48:36.582891 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.582900 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:36.582906 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:36.582964 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:36.608045 1550381 cri.go:89] found id: ""
	I1218 01:48:36.608067 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.608075 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:36.608081 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:36.608137 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:36.633385 1550381 cri.go:89] found id: ""
	I1218 01:48:36.633408 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.633417 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:36.633423 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:36.633482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:36.657140 1550381 cri.go:89] found id: ""
	I1218 01:48:36.657165 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.657175 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:36.657187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:36.657254 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:36.686651 1550381 cri.go:89] found id: ""
	I1218 01:48:36.686673 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.686683 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:36.686689 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:36.686753 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:36.712049 1550381 cri.go:89] found id: ""
	I1218 01:48:36.712073 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.712082 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:36.712089 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:36.712146 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:36.736327 1550381 cri.go:89] found id: ""
	I1218 01:48:36.736355 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.736369 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:36.736375 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:36.736432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:36.763059 1550381 cri.go:89] found id: ""
	I1218 01:48:36.763085 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.763094 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:36.763104 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:36.763115 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:36.818060 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:36.818095 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:36.833161 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:36.833198 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:36.900981 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:36.892245    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.892727    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894448    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894886    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.896574    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:36.901005 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:36.901018 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:36.926395 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:36.926435 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:39.461526 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:39.472938 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:39.473011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:39.499282 1550381 cri.go:89] found id: ""
	I1218 01:48:39.499309 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.499317 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:39.499324 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:39.499387 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:39.524947 1550381 cri.go:89] found id: ""
	I1218 01:48:39.524983 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.524992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:39.524998 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:39.525108 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:39.549919 1550381 cri.go:89] found id: ""
	I1218 01:48:39.549944 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.549953 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:39.549959 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:39.550021 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:39.574351 1550381 cri.go:89] found id: ""
	I1218 01:48:39.574376 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.574391 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:39.574398 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:39.574456 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:39.598033 1550381 cri.go:89] found id: ""
	I1218 01:48:39.598054 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.598063 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:39.598069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:39.598133 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:39.626910 1550381 cri.go:89] found id: ""
	I1218 01:48:39.626932 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.626940 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:39.626946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:39.627002 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:39.655231 1550381 cri.go:89] found id: ""
	I1218 01:48:39.655302 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.655326 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:39.655346 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:39.655426 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:39.684000 1550381 cri.go:89] found id: ""
	I1218 01:48:39.684079 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.684106 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:39.684129 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:39.684170 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:39.739075 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:39.739109 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:39.753861 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:39.753890 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:39.817313 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:39.809803    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.810430    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.811897    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.812206    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.813639    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:39.817335 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:39.817347 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:39.842685 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:39.842727 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:42.162239 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:48:42.190324 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:42.249384 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:48:42.249527 1550381 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
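Every manifest above fails for the same reason: before applying, kubectl validates each file against the cluster's OpenAPI schema, which it fetches from the apiserver at /openapi/v2, so with nothing listening on localhost:8443 validation cannot even start. A minimal Go sketch of that probe (a hypothetical stand-alone check, not minikube code) reproduces the exact dial error:

    // A minimal, hypothetical probe (not minikube code) of the request kubectl
    // makes before client-side validation: GET /openapi/v2 on the apiserver.
    // With nothing listening on localhost:8443 it fails exactly as logged.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 32 * time.Second, // matches kubectl's ?timeout=32s
    		Transport: &http.Transport{
    			// We only care whether the port answers, so skip cert checks.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
    	if err != nil {
    		fmt.Println("openapi probe failed:", err) // dial tcp [::1]:8443: connect: connection refused
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("openapi probe status:", resp.Status)
    }

Note that the --validate=false suggestion in the stderr only skips this schema fetch; the apply itself would still need a reachable apiserver.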
	W1218 01:48:42.279196 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:42.279234 1550381 retry.go:31] will retry after 35.148907823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
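The retry.go line above shows minikube deferring the failed storage-provisioner apply for ~35s before trying again. A stdlib-only sketch of that retry-with-backoff shape (the jittered delay formula and attempt budget are illustrative assumptions, not minikube's exact logic):

    // A stdlib-only sketch of the retry behaviour visible in the retry.go
    // lines ("will retry after 35.148907823s").
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing delay
    // between failures, and returns the last error if all attempts fail.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Grow the delay each attempt and add jitter so parallel appliers
    		// do not retry in lockstep.
    		delay := time.Duration(float64(base) * float64(i+1) * (1 + rand.Float64()))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	err := retry(3, 2*time.Second, func() error {
    		// Stand-in for: kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
    		return errors.New("dial tcp [::1]:8443: connect: connection refused")
    	})
    	fmt.Println("gave up:", err)
    }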
	I1218 01:48:42.371473 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:42.382637 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:42.382711 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:42.428461 1550381 cri.go:89] found id: ""
	I1218 01:48:42.428490 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.428499 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:42.428505 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:42.428565 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:42.464484 1550381 cri.go:89] found id: ""
	I1218 01:48:42.464511 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.464520 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:42.464526 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:42.464600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:42.501574 1550381 cri.go:89] found id: ""
	I1218 01:48:42.501644 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.501668 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:42.501682 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:42.501756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:42.529255 1550381 cri.go:89] found id: ""
	I1218 01:48:42.529283 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.529292 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:42.529299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:42.529357 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:42.563020 1550381 cri.go:89] found id: ""
	I1218 01:48:42.563093 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.563130 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:42.563153 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:42.563240 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:42.589599 1550381 cri.go:89] found id: ""
	I1218 01:48:42.589672 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.589689 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:42.589697 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:42.589756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:42.620478 1550381 cri.go:89] found id: ""
	I1218 01:48:42.620500 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.620509 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:42.620515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:42.620600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:42.647535 1550381 cri.go:89] found id: ""
	I1218 01:48:42.647560 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.647574 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:42.647583 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:42.647594 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:42.705328 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:42.705366 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:42.720602 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:42.720653 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:42.791434 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:42.782564    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.783462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785016    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.787043    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:42.791460 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:42.791474 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:42.816821 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:42.816855 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
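The cri.go/logs.go cycle above repeats once per control-plane component: list all containers whose name matches, and treat empty output as "no container found". A sketch of that loop, with the crictl invocation and component list copied from the log (illustrative only; it assumes crictl on PATH and passwordless sudo, and is not a supported minikube entry point):

    // A sketch of the per-component container poll in the cri.go lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		if ids := strings.Fields(string(out)); len(ids) > 0 {
    			fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
    		} else {
    			// Matches the logged: No container was found matching "<name>"
    			fmt.Printf("no container was found matching %q\n", name)
    		}
    	}
    }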
	I1218 01:48:45.263722 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:48:45.345805 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:48:45.349241 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:45.349279 1550381 retry.go:31] will retry after 26.611542555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:45.357893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:45.358009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:45.383950 1550381 cri.go:89] found id: ""
	I1218 01:48:45.383977 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.383986 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:45.383993 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:45.384055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:45.429969 1550381 cri.go:89] found id: ""
	I1218 01:48:45.429995 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.430004 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:45.430010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:45.430071 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:45.493689 1550381 cri.go:89] found id: ""
	I1218 01:48:45.493720 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.493730 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:45.493736 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:45.493830 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:45.520332 1550381 cri.go:89] found id: ""
	I1218 01:48:45.520355 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.520363 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:45.520369 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:45.520425 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:45.547181 1550381 cri.go:89] found id: ""
	I1218 01:48:45.547245 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.547270 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:45.547289 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:45.547366 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:45.572686 1550381 cri.go:89] found id: ""
	I1218 01:48:45.572754 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.572780 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:45.572804 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:45.572879 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:45.596710 1550381 cri.go:89] found id: ""
	I1218 01:48:45.596734 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.596743 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:45.596749 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:45.596809 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:45.622285 1550381 cri.go:89] found id: ""
	I1218 01:48:45.622316 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.622325 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:45.622335 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:45.622345 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:45.680819 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:45.680854 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:45.695825 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:45.695856 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:45.758598 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:45.749462    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.750187    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.751923    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.752474    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.754167    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:45.758621 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:45.758634 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:45.783476 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:45.783513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
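Each failed "describe nodes" attempt reduces to the same symptom: dial tcp [::1]:8443: connect: connection refused. The quickest independent confirmation is a plain TCP dial against the apiserver port; a hypothetical one-off check in Go, not a minikube API:

    // Dial the apiserver port directly. Success means something is listening;
    // the failure branch prints the same "connect: connection refused" seen
    // throughout these logs.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open")
    }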
	I1218 01:48:48.311112 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:48.321845 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:48.321917 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:48.347239 1550381 cri.go:89] found id: ""
	I1218 01:48:48.347260 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.347269 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:48.347276 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:48.347352 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:48.372522 1550381 cri.go:89] found id: ""
	I1218 01:48:48.372548 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.372557 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:48.372564 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:48.372641 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:48.419361 1550381 cri.go:89] found id: ""
	I1218 01:48:48.419385 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.419402 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:48.419409 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:48.419476 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:48.468755 1550381 cri.go:89] found id: ""
	I1218 01:48:48.468780 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.468789 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:48.468795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:48.468865 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:48.499951 1550381 cri.go:89] found id: ""
	I1218 01:48:48.499978 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.499987 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:48.499993 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:48.500066 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:48.525758 1550381 cri.go:89] found id: ""
	I1218 01:48:48.525784 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.525793 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:48.525799 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:48.525867 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:48.554959 1550381 cri.go:89] found id: ""
	I1218 01:48:48.554982 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.554991 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:48.554999 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:48.555073 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:48.579603 1550381 cri.go:89] found id: ""
	I1218 01:48:48.579627 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.579636 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:48.579646 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:48.579682 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:48.638239 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:48.638284 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:48.652698 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:48.652747 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:48.719758 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:48.711855    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.712379    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.713878    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.714310    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.715829    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:48.719781 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:48.719796 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:48.744911 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:48.744946 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:51.273570 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:51.283902 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:51.283973 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:51.308033 1550381 cri.go:89] found id: ""
	I1218 01:48:51.308057 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.308065 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:51.308072 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:51.308135 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:51.335581 1550381 cri.go:89] found id: ""
	I1218 01:48:51.335604 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.335612 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:51.335618 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:51.335676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:51.364109 1550381 cri.go:89] found id: ""
	I1218 01:48:51.364135 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.364144 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:51.364150 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:51.364208 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:51.401663 1550381 cri.go:89] found id: ""
	I1218 01:48:51.401689 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.401698 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:51.401704 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:51.401764 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:51.436653 1550381 cri.go:89] found id: ""
	I1218 01:48:51.436679 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.436688 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:51.436696 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:51.436755 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:51.484873 1550381 cri.go:89] found id: ""
	I1218 01:48:51.484900 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.484908 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:51.484915 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:51.484972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:51.512364 1550381 cri.go:89] found id: ""
	I1218 01:48:51.512389 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.512398 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:51.512404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:51.512463 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:51.536334 1550381 cri.go:89] found id: ""
	I1218 01:48:51.536359 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.536368 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:51.536378 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:51.536389 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:51.590814 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:51.590847 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:51.605410 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:51.605438 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:51.679184 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:51.670350    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.671165    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.673030    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.673634    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.675286    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:51.679247 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:51.679267 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:51.704862 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:51.704898 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
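The "Gathering logs for ..." sequence collects the same fixed set of sources each cycle (kubelet, dmesg, describe nodes, containerd, container status), each backed by a shell command run inside the node. A sketch that mirrors those commands (the source table is copied from the log; running them locally via bash rather than over minikube's ssh_runner is an illustrative assumption):

    // A sketch of the fixed log-gathering pass in the logs.go:123 lines.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		fmt.Printf("Gathering logs for %s ...\n", s.name)
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%s: %v\n", s.name, err)
    		}
    		fmt.Print(string(out))
    	}
    }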
	I1218 01:48:54.232571 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:54.243250 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:54.243318 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:54.268694 1550381 cri.go:89] found id: ""
	I1218 01:48:54.268762 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.268776 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:54.268783 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:54.268861 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:54.294766 1550381 cri.go:89] found id: ""
	I1218 01:48:54.294789 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.294798 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:54.294811 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:54.294872 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:54.319370 1550381 cri.go:89] found id: ""
	I1218 01:48:54.319396 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.319405 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:54.319411 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:54.319470 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:54.344762 1550381 cri.go:89] found id: ""
	I1218 01:48:54.344805 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.344815 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:54.344839 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:54.344928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:54.376778 1550381 cri.go:89] found id: ""
	I1218 01:48:54.376805 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.376823 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:54.376830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:54.376948 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:54.435510 1550381 cri.go:89] found id: ""
	I1218 01:48:54.435589 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.435620 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:54.435641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:54.435763 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:54.481350 1550381 cri.go:89] found id: ""
	I1218 01:48:54.481428 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.481456 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:54.481476 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:54.481621 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:54.520301 1550381 cri.go:89] found id: ""
	I1218 01:48:54.520377 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.520399 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:54.520420 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:54.520457 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:54.578993 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:54.579045 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:54.595845 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:54.595876 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:54.661543 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:54.653204    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.654003    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.655599    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.656056    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.657576    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:54.661566 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:54.661578 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:54.687751 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:54.687803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:57.222271 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:57.232723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:57.232795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:57.260837 1550381 cri.go:89] found id: ""
	I1218 01:48:57.260858 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.260866 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:57.260872 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:57.260928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:57.286122 1550381 cri.go:89] found id: ""
	I1218 01:48:57.286148 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.286156 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:57.286163 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:57.286220 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:57.310908 1550381 cri.go:89] found id: ""
	I1218 01:48:57.310930 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.310939 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:57.310945 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:57.311005 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:57.336552 1550381 cri.go:89] found id: ""
	I1218 01:48:57.336573 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.336583 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:57.336589 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:57.336681 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:57.363069 1550381 cri.go:89] found id: ""
	I1218 01:48:57.363098 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.363106 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:57.363113 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:57.363175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:57.387453 1550381 cri.go:89] found id: ""
	I1218 01:48:57.387483 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.387492 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:57.387499 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:57.387556 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:57.455540 1550381 cri.go:89] found id: ""
	I1218 01:48:57.455567 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.455576 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:57.455583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:57.455641 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:57.487729 1550381 cri.go:89] found id: ""
	I1218 01:48:57.487751 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.487759 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:57.487773 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:57.487783 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:57.513517 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:57.513555 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:57.541522 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:57.541591 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:57.599250 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:57.599285 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:57.614575 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:57.614612 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:57.685065 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:57.672222    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.672963    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.677651    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.678785    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.679420    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:00.185435 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:00.217821 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:00.217993 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:00.272675 1550381 cri.go:89] found id: ""
	I1218 01:49:00.272752 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.272781 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:00.272803 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:00.272911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:00.308098 1550381 cri.go:89] found id: ""
	I1218 01:49:00.308130 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.308140 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:00.308148 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:00.308229 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:00.342048 1550381 cri.go:89] found id: ""
	I1218 01:49:00.342083 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.342093 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:00.342102 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:00.342176 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:00.373793 1550381 cri.go:89] found id: ""
	I1218 01:49:00.373867 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.373893 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:00.373912 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:00.374032 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:00.453457 1550381 cri.go:89] found id: ""
	I1218 01:49:00.453540 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.453562 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:00.453580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:00.453674 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:00.497069 1550381 cri.go:89] found id: ""
	I1218 01:49:00.497139 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.497165 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:00.497229 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:00.497320 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:00.523805 1550381 cri.go:89] found id: ""
	I1218 01:49:00.523883 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.523907 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:00.523925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:00.523998 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:00.550245 1550381 cri.go:89] found id: ""
	I1218 01:49:00.550315 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.550338 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:00.550356 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:00.550368 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:00.606138 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:00.606171 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:00.621471 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:00.621501 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:00.687608 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:00.679362    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.680138    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.681738    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.682079    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.683574    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:00.687630 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:00.687645 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:00.713254 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:00.713288 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:03.251500 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:03.263863 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:03.263937 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:03.292341 1550381 cri.go:89] found id: ""
	I1218 01:49:03.292363 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.292372 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:03.292379 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:03.292444 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:03.318593 1550381 cri.go:89] found id: ""
	I1218 01:49:03.318618 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.318627 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:03.318633 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:03.318713 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:03.342954 1550381 cri.go:89] found id: ""
	I1218 01:49:03.342976 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.342984 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:03.342990 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:03.343056 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:03.369216 1550381 cri.go:89] found id: ""
	I1218 01:49:03.369240 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.369255 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:03.369262 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:03.369321 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:03.418160 1550381 cri.go:89] found id: ""
	I1218 01:49:03.418196 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.418208 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:03.418234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:03.418314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:03.468056 1550381 cri.go:89] found id: ""
	I1218 01:49:03.468090 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.468100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:03.468107 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:03.468177 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:03.493930 1550381 cri.go:89] found id: ""
	I1218 01:49:03.493954 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.493964 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:03.493970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:03.494028 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:03.522766 1550381 cri.go:89] found id: ""
	I1218 01:49:03.522799 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.522808 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:03.522817 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:03.522845 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:03.579881 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:03.579922 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:03.595497 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:03.595533 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:03.664750 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:03.656141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.656975    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.658591    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.659141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.660853    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:03.656141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.656975    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.658591    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.659141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.660853    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:03.664774 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:03.664789 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:03.690066 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:03.690102 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:06.220404 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:06.230940 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:06.231013 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:06.258449 1550381 cri.go:89] found id: ""
	I1218 01:49:06.258493 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.258501 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:06.258511 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:06.258570 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:06.284944 1550381 cri.go:89] found id: ""
	I1218 01:49:06.284967 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.284975 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:06.284981 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:06.285038 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:06.310888 1550381 cri.go:89] found id: ""
	I1218 01:49:06.310914 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.310923 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:06.310929 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:06.310992 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:06.336281 1550381 cri.go:89] found id: ""
	I1218 01:49:06.336306 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.336316 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:06.336322 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:06.336384 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:06.361424 1550381 cri.go:89] found id: ""
	I1218 01:49:06.361489 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.361507 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:06.361515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:06.361581 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:06.386353 1550381 cri.go:89] found id: ""
	I1218 01:49:06.386381 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.386390 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:06.386396 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:06.386458 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:06.420497 1550381 cri.go:89] found id: ""
	I1218 01:49:06.420523 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.420533 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:06.420540 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:06.420599 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:06.477983 1550381 cri.go:89] found id: ""
	I1218 01:49:06.478008 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.478017 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:06.478033 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:06.478045 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:06.542941 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:06.542988 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:06.557943 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:06.557971 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:06.638974 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:06.630522    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.631109    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.632992    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.633569    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.635234    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:06.630522    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.631109    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.632992    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.633569    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.635234    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:06.638996 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:06.639008 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:06.665193 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:06.665231 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:09.197687 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:09.208321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:09.208432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:09.233962 1550381 cri.go:89] found id: ""
	I1218 01:49:09.233985 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.233993 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:09.234000 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:09.234061 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:09.262673 1550381 cri.go:89] found id: ""
	I1218 01:49:09.262697 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.262706 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:09.262712 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:09.262773 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:09.289951 1550381 cri.go:89] found id: ""
	I1218 01:49:09.289973 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.289982 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:09.289988 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:09.290053 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:09.314541 1550381 cri.go:89] found id: ""
	I1218 01:49:09.314570 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.314578 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:09.314585 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:09.314650 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:09.343459 1550381 cri.go:89] found id: ""
	I1218 01:49:09.343484 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.343493 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:09.343500 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:09.343563 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:09.376389 1550381 cri.go:89] found id: ""
	I1218 01:49:09.376413 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.376422 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:09.376429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:09.376488 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:09.436490 1550381 cri.go:89] found id: ""
	I1218 01:49:09.436567 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.436591 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:09.436611 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:09.436730 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:09.486769 1550381 cri.go:89] found id: ""
	I1218 01:49:09.486798 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.486807 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:09.486817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:09.486827 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:09.512058 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:09.512099 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:09.540109 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:09.540137 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:09.595196 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:09.595233 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:09.610057 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:09.610088 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:09.676821 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:09.667757    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.668366    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670119    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670625    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.672212    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:09.667757    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.668366    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670119    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670625    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.672212    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
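The cycle that just closed repeats roughly every three seconds throughout this failure: minikube probes for a kube-apiserver process, queries crictl for each control-plane container by name, and, finding none, falls back to gathering kubelet/dmesg/containerd logs. A minimal sketch of that component scan is below; the command set is copied from the Run: lines above, but the Go wrapper itself is illustrative, not minikube's actual cri package.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints one container ID per line; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			// Matches the `No container was found matching "<name>"` warnings above.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("found %d container(s) for %s\n", len(ids), name)
	}
}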
	I1218 01:49:11.961101 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:49:12.022946 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:49:12.023052 1550381 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
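Every kubectl call in this block fails before validation even starts, because nothing is accepting connections on localhost:8443. A minimal reachability probe for that failure mode is sketched below, assuming the only question is whether anything is serving on the port; certificate verification is skipped since the probe predates any trust setup. This is an illustrative sketch, not minikube's retry logic.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiServerUp(addr string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Reachability only; we are not validating the serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		// e.g. dial tcp [::1]:8443: connect: connection refused
		return false
	}
	resp.Body.Close()
	return true // any HTTP response at all means the port is serving
}

func main() {
	if !apiServerUp("localhost:8443") {
		fmt.Println("apiserver unreachable; kubectl apply will fail with connection refused")
	}
}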
	I1218 01:49:12.177224 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:12.188868 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:12.188946 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:12.214139 1550381 cri.go:89] found id: ""
	I1218 01:49:12.214162 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.214171 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:12.214178 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:12.214264 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:12.242355 1550381 cri.go:89] found id: ""
	I1218 01:49:12.242380 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.242389 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:12.242395 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:12.242483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:12.266515 1550381 cri.go:89] found id: ""
	I1218 01:49:12.266540 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.266548 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:12.266555 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:12.266613 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:12.290463 1550381 cri.go:89] found id: ""
	I1218 01:49:12.290529 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.290545 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:12.290553 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:12.290618 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:12.318223 1550381 cri.go:89] found id: ""
	I1218 01:49:12.318247 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.318256 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:12.318262 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:12.318337 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:12.342197 1550381 cri.go:89] found id: ""
	I1218 01:49:12.342222 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.342231 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:12.342238 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:12.342302 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:12.370588 1550381 cri.go:89] found id: ""
	I1218 01:49:12.370611 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.370620 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:12.370626 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:12.370688 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:12.418224 1550381 cri.go:89] found id: ""
	I1218 01:49:12.418249 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.418258 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:12.418268 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:12.418279 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:12.523068 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:12.514778    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.515411    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517016    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517626    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.519200    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:12.514778    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.515411    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517016    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517626    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.519200    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:12.523095 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:12.523108 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:12.549040 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:12.549076 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:12.577176 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:12.577201 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:12.631665 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:12.631703 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:15.147547 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:15.158736 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:15.158812 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:15.184772 1550381 cri.go:89] found id: ""
	I1218 01:49:15.184838 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.184862 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:15.184881 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:15.184962 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:15.210609 1550381 cri.go:89] found id: ""
	I1218 01:49:15.210632 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.210641 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:15.210648 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:15.210712 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:15.238686 1550381 cri.go:89] found id: ""
	I1218 01:49:15.238722 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.238734 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:15.238741 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:15.238815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:15.264618 1550381 cri.go:89] found id: ""
	I1218 01:49:15.264675 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.264684 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:15.264692 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:15.264757 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:15.295205 1550381 cri.go:89] found id: ""
	I1218 01:49:15.295229 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.295244 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:15.295250 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:15.295319 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:15.320375 1550381 cri.go:89] found id: ""
	I1218 01:49:15.320398 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.320406 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:15.320412 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:15.320472 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:15.345880 1550381 cri.go:89] found id: ""
	I1218 01:49:15.345912 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.345921 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:15.345928 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:15.345989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:15.371477 1550381 cri.go:89] found id: ""
	I1218 01:49:15.371499 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.371508 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:15.371518 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:15.371530 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:15.432289 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:15.432325 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:15.513081 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:15.513118 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:15.528085 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:15.528163 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:15.589922 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:15.582052    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.582656    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584154    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584659    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.586181    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:15.582052    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.582656    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584154    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584659    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.586181    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:15.589943 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:15.589955 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:17.429823 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:49:17.494063 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:49:17.494186 1550381 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 01:49:17.497997 1550381 out.go:179] * Enabled addons: 
	I1218 01:49:17.500791 1550381 addons.go:530] duration metric: took 1m44.209848117s for enable addons: enabled=[]
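The summary above closes out the "apply failed, will retry" flow with an empty addon set after 1m44s. A hypothetical sketch of that flow follows: retry the apply until a deadline, then surface the error the way the `! Enabling ... returned an error` lines do. kubectlApply and the 5-second backoff are assumptions for illustration; the log does not state minikube's real helper names or retry interval.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func kubectlApply(manifest string) error {
	// Mirrors the command shape in the log; a real run needs a working kubeconfig.
	return exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest).Run()
}

func applyWithRetry(manifest string, deadline time.Duration) error {
	start := time.Now()
	for {
		err := kubectlApply(manifest)
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			// Give up and report, as the addons warnings above do.
			return fmt.Errorf("running callbacks: %w", err)
		}
		time.Sleep(5 * time.Second) // assumed backoff interval
	}
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 90*time.Second); err != nil {
		fmt.Println("! Enabling 'default-storageclass' returned an error:", err)
	}
}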
	I1218 01:49:18.115485 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:18.126625 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:18.126750 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:18.152997 1550381 cri.go:89] found id: ""
	I1218 01:49:18.153031 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.153041 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:18.153048 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:18.153114 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:18.184726 1550381 cri.go:89] found id: ""
	I1218 01:49:18.184748 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.184757 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:18.184764 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:18.184833 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:18.213873 1550381 cri.go:89] found id: ""
	I1218 01:49:18.213945 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.213971 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:18.213991 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:18.214081 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:18.243010 1550381 cri.go:89] found id: ""
	I1218 01:49:18.243086 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.243109 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:18.243128 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:18.243218 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:18.267052 1550381 cri.go:89] found id: ""
	I1218 01:49:18.267117 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.267142 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:18.267158 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:18.267246 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:18.291939 1550381 cri.go:89] found id: ""
	I1218 01:49:18.292002 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.292026 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:18.292045 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:18.292129 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:18.318195 1550381 cri.go:89] found id: ""
	I1218 01:49:18.318219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.318233 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:18.318240 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:18.318299 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:18.346276 1550381 cri.go:89] found id: ""
	I1218 01:49:18.346310 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.346319 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:18.346329 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:18.346341 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:18.407199 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:18.407257 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:18.440997 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:18.441077 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:18.537719 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:18.529288    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.529919    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.531704    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.532082    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.533717    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:18.529288    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.529919    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.531704    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.532082    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.533717    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:18.537789 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:18.537810 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:18.563514 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:18.563550 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:21.091361 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:21.102189 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:21.102289 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:21.130931 1550381 cri.go:89] found id: ""
	I1218 01:49:21.130958 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.130967 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:21.130974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:21.131033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:21.155877 1550381 cri.go:89] found id: ""
	I1218 01:49:21.155951 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.155984 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:21.156004 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:21.156088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:21.180785 1550381 cri.go:89] found id: ""
	I1218 01:49:21.180809 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.180818 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:21.180824 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:21.180908 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:21.206344 1550381 cri.go:89] found id: ""
	I1218 01:49:21.206366 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.206375 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:21.206381 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:21.206441 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:21.230752 1550381 cri.go:89] found id: ""
	I1218 01:49:21.230775 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.230783 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:21.230789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:21.230846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:21.255317 1550381 cri.go:89] found id: ""
	I1218 01:49:21.255391 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.255416 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:21.255436 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:21.255520 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:21.284319 1550381 cri.go:89] found id: ""
	I1218 01:49:21.284345 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.284355 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:21.284361 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:21.284420 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:21.313090 1550381 cri.go:89] found id: ""
	I1218 01:49:21.313116 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.313124 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:21.313133 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:21.313143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:21.367961 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:21.367997 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:21.382941 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:21.382972 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:21.496229 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:21.487688    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.488579    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490302    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490870    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.492137    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:21.487688    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.488579    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490302    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490870    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.492137    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:21.496249 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:21.496261 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:21.526182 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:21.526216 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:24.057294 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:24.070220 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:24.070292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:24.104394 1550381 cri.go:89] found id: ""
	I1218 01:49:24.104419 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.104428 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:24.104434 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:24.104495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:24.129335 1550381 cri.go:89] found id: ""
	I1218 01:49:24.129358 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.129366 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:24.129371 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:24.129429 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:24.153339 1550381 cri.go:89] found id: ""
	I1218 01:49:24.153361 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.153370 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:24.153376 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:24.153439 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:24.178645 1550381 cri.go:89] found id: ""
	I1218 01:49:24.178669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.178677 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:24.178684 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:24.178742 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:24.202721 1550381 cri.go:89] found id: ""
	I1218 01:49:24.202744 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.202753 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:24.202765 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:24.202827 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:24.228231 1550381 cri.go:89] found id: ""
	I1218 01:49:24.228255 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.228264 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:24.228271 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:24.228334 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:24.252564 1550381 cri.go:89] found id: ""
	I1218 01:49:24.252585 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.252593 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:24.252599 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:24.252682 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:24.282899 1550381 cri.go:89] found id: ""
	I1218 01:49:24.282975 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.283000 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:24.283015 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:24.283027 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:24.340471 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:24.340506 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:24.355477 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:24.355511 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:24.448676 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:24.434380    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.435192    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.436820    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441209    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441503    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:24.434380    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.435192    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.436820    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441209    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441503    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:24.448701 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:24.448720 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:24.484800 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:24.484875 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:27.016359 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:27.027204 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:27.027276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:27.054358 1550381 cri.go:89] found id: ""
	I1218 01:49:27.054383 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.054392 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:27.054398 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:27.054456 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:27.079191 1550381 cri.go:89] found id: ""
	I1218 01:49:27.079219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.079228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:27.079234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:27.079297 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:27.104834 1550381 cri.go:89] found id: ""
	I1218 01:49:27.104856 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.104865 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:27.104871 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:27.104943 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:27.134064 1550381 cri.go:89] found id: ""
	I1218 01:49:27.134138 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.134154 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:27.134161 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:27.134227 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:27.159891 1550381 cri.go:89] found id: ""
	I1218 01:49:27.159915 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.159925 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:27.159931 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:27.159990 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:27.186008 1550381 cri.go:89] found id: ""
	I1218 01:49:27.186035 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.186044 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:27.186050 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:27.186135 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:27.211311 1550381 cri.go:89] found id: ""
	I1218 01:49:27.211337 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.211346 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:27.211352 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:27.211433 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:27.236397 1550381 cri.go:89] found id: ""
	I1218 01:49:27.236431 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.236440 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:27.236450 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:27.236461 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:27.293966 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:27.294001 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:27.309317 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:27.309355 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:27.380717 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:27.372509    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.373162    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374199    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374687    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.376361    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:27.372509    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.373162    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374199    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374687    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.376361    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:27.380737 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:27.380749 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:27.410136 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:27.410175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:29.955798 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:29.968674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:29.968788 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:29.996170 1550381 cri.go:89] found id: ""
	I1218 01:49:29.996197 1550381 logs.go:282] 0 containers: []
	W1218 01:49:29.996208 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:29.996214 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:29.996276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:30.036959 1550381 cri.go:89] found id: ""
	I1218 01:49:30.036983 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.036992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:30.036999 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:30.037067 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:30.069036 1550381 cri.go:89] found id: ""
	I1218 01:49:30.069065 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.069076 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:30.069092 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:30.069231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:30.098534 1550381 cri.go:89] found id: ""
	I1218 01:49:30.098559 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.098568 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:30.098575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:30.098637 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:30.127481 1550381 cri.go:89] found id: ""
	I1218 01:49:30.127506 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.127515 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:30.127521 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:30.127588 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:30.153748 1550381 cri.go:89] found id: ""
	I1218 01:49:30.153773 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.153782 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:30.153789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:30.153872 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:30.178887 1550381 cri.go:89] found id: ""
	I1218 01:49:30.178913 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.178922 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:30.178929 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:30.179010 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:30.204533 1550381 cri.go:89] found id: ""
	I1218 01:49:30.204559 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.204568 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:30.204578 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:30.204589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:30.260146 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:30.260180 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:30.275037 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:30.275067 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:30.338959 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:30.330794    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.331353    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333075    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333584    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.335039    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:30.330794    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.331353    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333075    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333584    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.335039    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:30.338978 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:30.338990 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:30.364082 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:30.364116 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:32.906096 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:32.916660 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:32.916731 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:32.940216 1550381 cri.go:89] found id: ""
	I1218 01:49:32.940238 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.940247 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:32.940254 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:32.940314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:32.967934 1550381 cri.go:89] found id: ""
	I1218 01:49:32.967956 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.967963 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:32.967970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:32.968027 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:32.991930 1550381 cri.go:89] found id: ""
	I1218 01:49:32.991952 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.991961 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:32.991968 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:32.992027 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:33.018215 1550381 cri.go:89] found id: ""
	I1218 01:49:33.018280 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.018303 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:33.018322 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:33.018416 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:33.046738 1550381 cri.go:89] found id: ""
	I1218 01:49:33.046783 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.046794 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:33.046801 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:33.046873 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:33.072642 1550381 cri.go:89] found id: ""
	I1218 01:49:33.072669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.072678 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:33.072684 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:33.072743 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:33.097687 1550381 cri.go:89] found id: ""
	I1218 01:49:33.097713 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.097722 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:33.097729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:33.097980 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:33.125010 1550381 cri.go:89] found id: ""
	I1218 01:49:33.125090 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.125107 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:33.125118 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:33.125134 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:33.139761 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:33.139795 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:33.204966 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:33.197038    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.197630    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199169    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199600    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.201028    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:33.197038    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.197630    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199169    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199600    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.201028    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:33.204990 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:33.205002 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:33.230884 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:33.230929 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:33.263709 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:33.263739 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:35.820022 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:35.830483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:35.830552 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:35.855134 1550381 cri.go:89] found id: ""
	I1218 01:49:35.855161 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.855170 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:35.855177 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:35.855239 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:35.881968 1550381 cri.go:89] found id: ""
	I1218 01:49:35.881997 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.882006 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:35.882013 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:35.882074 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:35.907456 1550381 cri.go:89] found id: ""
	I1218 01:49:35.907481 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.907490 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:35.907496 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:35.907555 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:35.936819 1550381 cri.go:89] found id: ""
	I1218 01:49:35.936845 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.936854 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:35.936860 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:35.936939 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:35.961081 1550381 cri.go:89] found id: ""
	I1218 01:49:35.961107 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.961116 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:35.961123 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:35.961187 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:35.985065 1550381 cri.go:89] found id: ""
	I1218 01:49:35.985091 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.985100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:35.985106 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:35.985189 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:36.013869 1550381 cri.go:89] found id: ""
	I1218 01:49:36.013894 1550381 logs.go:282] 0 containers: []
	W1218 01:49:36.013903 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:36.013909 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:36.013972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:36.039260 1550381 cri.go:89] found id: ""
	I1218 01:49:36.039283 1550381 logs.go:282] 0 containers: []
	W1218 01:49:36.039291 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:36.039300 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:36.039312 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:36.069571 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:36.069659 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:36.126151 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:36.126186 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:36.141484 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:36.141514 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:36.209837 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:36.200737    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.201540    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.202385    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.203307    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.204008    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:36.200737    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.201540    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.202385    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.203307    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.204008    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:36.209870 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:36.209883 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:38.735237 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:38.746104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:38.746193 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:38.772225 1550381 cri.go:89] found id: ""
	I1218 01:49:38.772252 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.772261 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:38.772268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:38.772330 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:38.797393 1550381 cri.go:89] found id: ""
	I1218 01:49:38.797420 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.797429 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:38.797435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:38.797498 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:38.822824 1550381 cri.go:89] found id: ""
	I1218 01:49:38.822847 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.822859 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:38.822868 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:38.822927 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:38.847877 1550381 cri.go:89] found id: ""
	I1218 01:49:38.847910 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.847919 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:38.847925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:38.847985 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:38.874529 1550381 cri.go:89] found id: ""
	I1218 01:49:38.874555 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.874564 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:38.874570 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:38.874655 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:38.902339 1550381 cri.go:89] found id: ""
	I1218 01:49:38.902406 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.902429 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:38.902447 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:38.902535 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:38.927712 1550381 cri.go:89] found id: ""
	I1218 01:49:38.927745 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.927754 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:38.927761 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:38.927830 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:38.954870 1550381 cri.go:89] found id: ""
	I1218 01:49:38.954937 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.954964 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:38.954986 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:38.955069 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:39.010028 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:39.010080 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:39.025363 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:39.025392 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:39.091129 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:39.080844    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.081674    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.083594    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.084220    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.086510    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:39.080844    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.081674    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.083594    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.084220    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.086510    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:39.091201 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:39.091221 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:39.116775 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:39.116809 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:41.650913 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:41.662276 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:41.662344 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:41.731218 1550381 cri.go:89] found id: ""
	I1218 01:49:41.731246 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.731255 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:41.731261 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:41.731319 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:41.756567 1550381 cri.go:89] found id: ""
	I1218 01:49:41.756665 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.756680 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:41.756686 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:41.756755 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:41.785421 1550381 cri.go:89] found id: ""
	I1218 01:49:41.785449 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.785458 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:41.785464 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:41.785522 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:41.810479 1550381 cri.go:89] found id: ""
	I1218 01:49:41.810501 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.810510 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:41.810524 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:41.810590 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:41.835839 1550381 cri.go:89] found id: ""
	I1218 01:49:41.835863 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.835872 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:41.835878 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:41.835940 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:41.864064 1550381 cri.go:89] found id: ""
	I1218 01:49:41.864092 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.864100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:41.864106 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:41.864162 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:41.889810 1550381 cri.go:89] found id: ""
	I1218 01:49:41.889880 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.889911 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:41.889924 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:41.889997 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:41.913756 1550381 cri.go:89] found id: ""
	I1218 01:49:41.913824 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.913849 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:41.913871 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:41.913902 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:41.943258 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:41.943283 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:41.998631 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:41.998673 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:42.016861 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:42.016892 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:42.086550 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:42.077000    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.077668    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.079628    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.080105    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.081866    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:42.077000    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.077668    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.079628    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.080105    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.081866    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:42.086592 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:42.086609 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:44.616940 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:44.627561 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:44.627705 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:44.700300 1550381 cri.go:89] found id: ""
	I1218 01:49:44.700322 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.700331 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:44.700337 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:44.700396 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:44.736586 1550381 cri.go:89] found id: ""
	I1218 01:49:44.736669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.736685 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:44.736693 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:44.736760 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:44.760996 1550381 cri.go:89] found id: ""
	I1218 01:49:44.761020 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.761029 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:44.761035 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:44.761102 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:44.786601 1550381 cri.go:89] found id: ""
	I1218 01:49:44.786637 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.786646 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:44.786655 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:44.786723 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:44.812292 1550381 cri.go:89] found id: ""
	I1218 01:49:44.812314 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.812322 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:44.812329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:44.812415 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:44.838185 1550381 cri.go:89] found id: ""
	I1218 01:49:44.838219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.838229 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:44.838236 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:44.838298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:44.867060 1550381 cri.go:89] found id: ""
	I1218 01:49:44.867081 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.867089 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:44.867095 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:44.867151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:44.892070 1550381 cri.go:89] found id: ""
	I1218 01:49:44.892099 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.892108 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:44.892117 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:44.892133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:44.906549 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:44.906575 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:44.971842 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:44.963108    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.963929    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.965509    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.966125    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.967743    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:44.963108    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.963929    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.965509    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.966125    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.967743    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:44.971863 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:44.971877 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:44.997318 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:44.997352 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:45.078604 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:45.078658 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
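	(The cycle above — `crictl ps -a --quiet --name=<component>` for each control-plane component, each returning zero containers, followed by gathering kubelet/dmesg/containerd logs — repeats for the rest of this failure. A minimal Go sketch of that listing step, purely illustrative and not minikube's actual cri.go code; it assumes `sudo` and `crictl` are available on the node:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns the
// matching container IDs, one per output line. An empty result corresponds to
// the `found id: ""` / "0 containers" lines in the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("error listing %q: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```

In this run every component returns an empty list, so minikube falls through to the journalctl/dmesg log gathering each time.)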
	I1218 01:49:47.669132 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:47.684661 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:47.684728 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:47.724476 1550381 cri.go:89] found id: ""
	I1218 01:49:47.724498 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.724509 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:47.724515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:47.724576 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:47.758012 1550381 cri.go:89] found id: ""
	I1218 01:49:47.758036 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.758044 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:47.758051 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:47.758109 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:47.786154 1550381 cri.go:89] found id: ""
	I1218 01:49:47.786180 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.786189 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:47.786196 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:47.786258 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:47.810902 1550381 cri.go:89] found id: ""
	I1218 01:49:47.810928 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.810937 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:47.810944 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:47.811003 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:47.836006 1550381 cri.go:89] found id: ""
	I1218 01:49:47.836032 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.836040 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:47.836049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:47.836119 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:47.861054 1550381 cri.go:89] found id: ""
	I1218 01:49:47.861078 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.861087 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:47.861094 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:47.861167 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:47.889731 1550381 cri.go:89] found id: ""
	I1218 01:49:47.889756 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.889765 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:47.889772 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:47.889829 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:47.918028 1550381 cri.go:89] found id: ""
	I1218 01:49:47.918055 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.918064 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:47.918073 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:47.918090 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:47.972822 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:47.972860 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:47.987701 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:47.987730 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:48.055884 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:48.046737    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.047612    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.048505    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050346    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050922    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:48.046737    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.047612    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.048505    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050346    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050922    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:48.055906 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:48.055919 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:48.081983 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:48.082021 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:50.614399 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:50.625532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:50.625607 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:50.669636 1550381 cri.go:89] found id: ""
	I1218 01:49:50.669663 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.669672 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:50.669678 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:50.669737 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:50.731793 1550381 cri.go:89] found id: ""
	I1218 01:49:50.731820 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.731829 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:50.731835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:50.731903 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:50.758384 1550381 cri.go:89] found id: ""
	I1218 01:49:50.758407 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.758416 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:50.758422 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:50.758481 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:50.783123 1550381 cri.go:89] found id: ""
	I1218 01:49:50.783148 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.783157 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:50.783163 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:50.783224 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:50.807986 1550381 cri.go:89] found id: ""
	I1218 01:49:50.808010 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.808019 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:50.808026 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:50.808084 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:50.833014 1550381 cri.go:89] found id: ""
	I1218 01:49:50.833037 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.833058 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:50.833066 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:50.833125 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:50.857525 1550381 cri.go:89] found id: ""
	I1218 01:49:50.857551 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.857560 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:50.857567 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:50.857631 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:50.882511 1550381 cri.go:89] found id: ""
	I1218 01:49:50.882535 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.882543 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:50.882552 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:50.882565 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:50.916936 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:50.916963 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:50.972064 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:50.972098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:50.987003 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:50.987031 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:51.056796 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:51.048453    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.048999    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.050667    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.051216    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.052850    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:51.048453    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.048999    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.050667    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.051216    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.052850    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:51.056817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:51.056829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
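	(Every `kubectl describe nodes` attempt above fails with `dial tcp [::1]:8443: connect: connection refused`, meaning nothing is listening on the apiserver port at all — consistent with `crictl` finding no kube-apiserver container. A small sketch that probes that condition directly; this is a hypothetical diagnostic, not part of the test suite, and the `syscall.ECONNREFUSED` check is Linux-specific:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

// Dial the apiserver port the way kubectl's client ultimately does.
// ECONNREFUSED means the port is closed (no kube-apiserver process), which
// is a different failure mode from a timeout (process up but unresponsive).
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("connection refused: no apiserver process on the port")
		return
	}
	fmt.Printf("dial failed for another reason: %v\n", err)
}
```
)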
	I1218 01:49:53.582769 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:53.594237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:53.594316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:53.619778 1550381 cri.go:89] found id: ""
	I1218 01:49:53.619800 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.619809 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:53.619815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:53.619877 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:53.677064 1550381 cri.go:89] found id: ""
	I1218 01:49:53.677087 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.677097 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:53.677103 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:53.677179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:53.733772 1550381 cri.go:89] found id: ""
	I1218 01:49:53.733798 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.733808 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:53.733815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:53.733876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:53.759569 1550381 cri.go:89] found id: ""
	I1218 01:49:53.759594 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.759603 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:53.759609 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:53.759667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:53.785969 1550381 cri.go:89] found id: ""
	I1218 01:49:53.785993 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.786002 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:53.786008 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:53.786072 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:53.810819 1550381 cri.go:89] found id: ""
	I1218 01:49:53.810843 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.810851 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:53.810858 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:53.810923 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:53.836207 1550381 cri.go:89] found id: ""
	I1218 01:49:53.836271 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.836295 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:53.836314 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:53.836395 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:53.860468 1550381 cri.go:89] found id: ""
	I1218 01:49:53.860499 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.860508 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:53.860518 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:53.860537 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:53.917328 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:53.917365 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:53.932367 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:53.932407 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:54.001703 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:53.991890    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.992813    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994440    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994861    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.996408    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:53.991890    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.992813    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994440    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994861    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.996408    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:54.001723 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:54.001737 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:54.030548 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:54.030584 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:56.561340 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:56.571927 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:56.571998 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:56.595966 1550381 cri.go:89] found id: ""
	I1218 01:49:56.595996 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.596006 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:56.596012 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:56.596073 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:56.620113 1550381 cri.go:89] found id: ""
	I1218 01:49:56.620136 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.620145 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:56.620151 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:56.620211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:56.655375 1550381 cri.go:89] found id: ""
	I1218 01:49:56.655401 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.655410 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:56.655417 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:56.655477 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:56.711903 1550381 cri.go:89] found id: ""
	I1218 01:49:56.711931 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.711940 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:56.711946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:56.712007 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:56.748501 1550381 cri.go:89] found id: ""
	I1218 01:49:56.748527 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.748536 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:56.748542 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:56.748600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:56.774097 1550381 cri.go:89] found id: ""
	I1218 01:49:56.774121 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.774130 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:56.774137 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:56.774196 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:56.802594 1550381 cri.go:89] found id: ""
	I1218 01:49:56.802618 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.802627 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:56.802633 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:56.802690 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:56.827592 1550381 cri.go:89] found id: ""
	I1218 01:49:56.827615 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.827623 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:56.827633 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:56.827645 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:56.852403 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:56.852433 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:56.880076 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:56.880109 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:56.935675 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:56.935712 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:56.950522 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:56.950549 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:57.019412 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:57.010556    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.011245    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013030    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013710    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.014875    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:57.010556    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.011245    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013030    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013710    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.014875    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:59.521100 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:59.531832 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:59.531908 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:59.557309 1550381 cri.go:89] found id: ""
	I1218 01:49:59.557333 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.557342 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:59.557349 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:59.557406 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:59.581813 1550381 cri.go:89] found id: ""
	I1218 01:49:59.581889 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.581911 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:59.581919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:59.581978 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:59.605979 1550381 cri.go:89] found id: ""
	I1218 01:49:59.606003 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.606012 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:59.606018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:59.606101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:59.631076 1550381 cri.go:89] found id: ""
	I1218 01:49:59.631101 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.631110 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:59.631117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:59.631210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:59.670164 1550381 cri.go:89] found id: ""
	I1218 01:49:59.670189 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.670198 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:59.670205 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:59.670309 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:59.706830 1550381 cri.go:89] found id: ""
	I1218 01:49:59.706856 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.706865 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:59.706872 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:59.706953 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:59.739787 1550381 cri.go:89] found id: ""
	I1218 01:49:59.739815 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.739824 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:59.739830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:59.739892 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:59.766523 1550381 cri.go:89] found id: ""
	I1218 01:49:59.766548 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.766558 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:59.766568 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:59.766579 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:59.822153 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:59.822193 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:59.837991 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:59.838016 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:59.905967 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:59.897227    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.897769    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899360    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899879    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.901423    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:59.897227    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.897769    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899360    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899879    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.901423    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:59.905990 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:59.906003 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:59.931368 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:59.931401 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
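	(The timestamps show the whole diagnostic cycle repeating on a roughly three-second cadence — 01:49:44, :47, :50, :53, :56, :59, and so on — each iteration beginning with `sudo pgrep -xnf kube-apiserver.*minikube.*`. That is consistent with a poll-until-deadline loop; the sketch below is a guess at its shape, with the interval and deadline inferred from the log rather than taken from minikube's actual constants:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process matching the
// minikube profile is alive, using the same pgrep invocation as the log.
// pgrep exits 1 when nothing matches, which exec surfaces as an error.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative; the real test waits far longer
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence between cycles above
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```
)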
	I1218 01:50:02.467452 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:02.478157 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:02.478230 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:02.504286 1550381 cri.go:89] found id: ""
	I1218 01:50:02.504311 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.504321 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:02.504328 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:02.504390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:02.530207 1550381 cri.go:89] found id: ""
	I1218 01:50:02.530232 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.530242 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:02.530249 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:02.530308 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:02.561278 1550381 cri.go:89] found id: ""
	I1218 01:50:02.561305 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.561314 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:02.561320 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:02.561383 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:02.586119 1550381 cri.go:89] found id: ""
	I1218 01:50:02.586144 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.586153 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:02.586159 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:02.586218 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:02.611212 1550381 cri.go:89] found id: ""
	I1218 01:50:02.611239 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.611249 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:02.611256 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:02.611317 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:02.638670 1550381 cri.go:89] found id: ""
	I1218 01:50:02.638697 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.638705 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:02.638715 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:02.638819 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:02.699868 1550381 cri.go:89] found id: ""
	I1218 01:50:02.699897 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.699906 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:02.699913 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:02.699971 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:02.753340 1550381 cri.go:89] found id: ""
	I1218 01:50:02.753371 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.753381 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:02.753391 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:02.753402 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:02.809735 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:02.809769 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:02.825241 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:02.825271 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:02.894096 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:02.885135    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.885712    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887483    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887956    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.889549    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:02.885135    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.885712    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887483    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887956    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.889549    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:02.894118 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:02.894130 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:02.919985 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:02.920021 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:05.450883 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:05.461914 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:05.461989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:05.487197 1550381 cri.go:89] found id: ""
	I1218 01:50:05.487221 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.487230 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:05.487237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:05.487297 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:05.513273 1550381 cri.go:89] found id: ""
	I1218 01:50:05.513304 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.513313 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:05.513321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:05.513385 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:05.544168 1550381 cri.go:89] found id: ""
	I1218 01:50:05.544191 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.544200 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:05.544206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:05.544306 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:05.570574 1550381 cri.go:89] found id: ""
	I1218 01:50:05.570597 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.570607 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:05.570613 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:05.570675 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:05.598812 1550381 cri.go:89] found id: ""
	I1218 01:50:05.598837 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.598845 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:05.598852 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:05.598915 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:05.628314 1550381 cri.go:89] found id: ""
	I1218 01:50:05.628339 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.628348 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:05.628354 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:05.628418 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:05.665714 1550381 cri.go:89] found id: ""
	I1218 01:50:05.665742 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.665751 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:05.665757 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:05.665817 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:05.733576 1550381 cri.go:89] found id: ""
	I1218 01:50:05.733603 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.733624 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:05.733634 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:05.733652 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:05.795404 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:05.795439 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:05.811319 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:05.811347 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:05.878494 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:05.869308    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.869863    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.871616    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.872034    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.873656    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:05.869308    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.869863    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.871616    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.872034    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.873656    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:05.878517 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:05.878532 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:05.904153 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:05.904185 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
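	(The "container status" step above runs ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a``, i.e. it prefers crictl but falls back to docker so some container listing always lands in the gathered logs. A minimal sketch of the same fallback, assuming nothing beyond the two CLIs the log already invokes:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the log: try `crictl ps -a` first,
// and if crictl is absent or fails, fall back to `docker ps -a`.
func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, derr := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if derr != nil {
		return "", fmt.Errorf("crictl failed (%v) and docker failed (%v)", err, derr)
	}
	return string(out), nil
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("could not list containers:", err)
		return
	}
	fmt.Print(out)
}
```
)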
	I1218 01:50:08.433275 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:08.443880 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:08.443983 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:08.468382 1550381 cri.go:89] found id: ""
	I1218 01:50:08.468408 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.468417 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:08.468424 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:08.468483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:08.498576 1550381 cri.go:89] found id: ""
	I1218 01:50:08.498629 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.498656 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:08.498662 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:08.498764 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:08.524767 1550381 cri.go:89] found id: ""
	I1218 01:50:08.524790 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.524799 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:08.524806 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:08.524868 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:08.551353 1550381 cri.go:89] found id: ""
	I1218 01:50:08.551380 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.551399 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:08.551406 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:08.551482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:08.577687 1550381 cri.go:89] found id: ""
	I1218 01:50:08.577713 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.577722 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:08.577729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:08.577816 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:08.603410 1550381 cri.go:89] found id: ""
	I1218 01:50:08.603434 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.603443 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:08.603450 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:08.603530 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:08.630799 1550381 cri.go:89] found id: ""
	I1218 01:50:08.630824 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.630833 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:08.630840 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:08.630903 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:08.705200 1550381 cri.go:89] found id: ""
	I1218 01:50:08.705228 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.705237 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:08.705247 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:08.705260 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:08.733020 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:08.733047 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:08.798171 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:08.789301    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.790118    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.791840    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.792498    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.794100    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:08.798195 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:08.798217 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:08.823651 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:08.823682 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:08.851693 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:08.851720 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
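The same probe-and-gather cycle repeats below every few seconds with fresh timestamps and PIDs. A minimal Go sketch of the health probe that drives these cycles, assuming a local shell stands in for minikube's SSH session into the node (the real loop sits behind ssh_runner.go and is not reproduced here):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func apiServerRunning() bool {
    	// Same probe as the logged "sudo pgrep -xnf kube-apiserver.*minikube.*":
    	// pgrep exits 0 only when a matching process exists.
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(1 * time.Minute) // illustrative deadline, not minikube's
    	for time.Now().Before(deadline) {
    		if apiServerRunning() {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		time.Sleep(3 * time.Second) // the log shows roughly 3 s between probe cycles
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }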
	I1218 01:50:11.407503 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:11.418083 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:11.418157 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:11.443131 1550381 cri.go:89] found id: ""
	I1218 01:50:11.443153 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.443161 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:11.443167 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:11.443225 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:11.468456 1550381 cri.go:89] found id: ""
	I1218 01:50:11.468480 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.468489 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:11.468495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:11.468559 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:11.494875 1550381 cri.go:89] found id: ""
	I1218 01:50:11.494900 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.494910 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:11.494916 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:11.494976 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:11.522672 1550381 cri.go:89] found id: ""
	I1218 01:50:11.522695 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.522703 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:11.522710 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:11.522774 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:11.550689 1550381 cri.go:89] found id: ""
	I1218 01:50:11.550713 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.550723 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:11.550729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:11.550789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:11.579573 1550381 cri.go:89] found id: ""
	I1218 01:50:11.579600 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.579608 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:11.579615 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:11.579677 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:11.605240 1550381 cri.go:89] found id: ""
	I1218 01:50:11.605265 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.605274 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:11.605281 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:11.605348 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:11.631171 1550381 cri.go:89] found id: ""
	I1218 01:50:11.631198 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.631208 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:11.631217 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:11.631228 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:11.709937 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:11.709969 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:11.779988 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:11.780023 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:11.795215 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:11.795243 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:11.862143 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:11.853713    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.854370    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856115    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856647    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.858165    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:11.862165 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:11.862177 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
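Each cycle also scans for every expected control-plane and addon container by name (the cri.go:54/89 lines) and finds none. A sketch of that per-component scan, assuming crictl is installed where this runs; in the log the identical command is executed inside the node over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The same eight names the cri.go lines iterate over in each cycle.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d container(s)\n", name, len(ids)) // the log reports found id: "" / 0 containers
    	}
    }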
	I1218 01:50:14.389878 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:14.400681 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:14.400756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:14.427103 1550381 cri.go:89] found id: ""
	I1218 01:50:14.427127 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.427136 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:14.427142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:14.427200 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:14.455157 1550381 cri.go:89] found id: ""
	I1218 01:50:14.455180 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.455189 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:14.455195 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:14.455260 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:14.481712 1550381 cri.go:89] found id: ""
	I1218 01:50:14.481738 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.481752 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:14.481759 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:14.481821 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:14.506286 1550381 cri.go:89] found id: ""
	I1218 01:50:14.506312 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.506320 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:14.506327 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:14.506385 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:14.531764 1550381 cri.go:89] found id: ""
	I1218 01:50:14.531789 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.531797 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:14.531804 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:14.531864 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:14.556792 1550381 cri.go:89] found id: ""
	I1218 01:50:14.556817 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.556826 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:14.556832 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:14.556896 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:14.581496 1550381 cri.go:89] found id: ""
	I1218 01:50:14.581521 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.581531 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:14.581537 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:14.581603 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:14.605950 1550381 cri.go:89] found id: ""
	I1218 01:50:14.605973 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.605982 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:14.605992 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:14.606007 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:14.631804 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:14.631838 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:14.684967 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:14.685004 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:14.769991 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:14.770039 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:14.785356 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:14.785391 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:14.851585 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:14.843391    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.844021    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.845569    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.846097    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.847598    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:17.353376 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:17.364408 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:17.364479 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:17.389035 1550381 cri.go:89] found id: ""
	I1218 01:50:17.389062 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.389071 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:17.389077 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:17.389141 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:17.418594 1550381 cri.go:89] found id: ""
	I1218 01:50:17.418620 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.418628 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:17.418634 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:17.418693 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:17.444908 1550381 cri.go:89] found id: ""
	I1218 01:50:17.444930 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.444938 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:17.444945 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:17.445006 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:17.470076 1550381 cri.go:89] found id: ""
	I1218 01:50:17.470100 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.470109 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:17.470117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:17.470178 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:17.494949 1550381 cri.go:89] found id: ""
	I1218 01:50:17.494972 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.494984 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:17.494992 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:17.495050 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:17.523740 1550381 cri.go:89] found id: ""
	I1218 01:50:17.523767 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.523775 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:17.523782 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:17.523840 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:17.551184 1550381 cri.go:89] found id: ""
	I1218 01:50:17.551212 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.551220 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:17.551227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:17.551290 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:17.576421 1550381 cri.go:89] found id: ""
	I1218 01:50:17.576446 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.576454 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:17.576464 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:17.576476 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:17.640879 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:17.631690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.632375    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634161    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.636185    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:17.640898 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:17.640911 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:17.719096 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:17.719184 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:17.749240 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:17.749266 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:17.804542 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:17.804581 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
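Every kubectl failure in these cycles reduces to one symptom: nothing is listening on localhost:8443 inside the node, so "describe nodes" can never connect. A sketch that reproduces the same check with a bare TCP dial, assuming it runs where the apiserver would normally be reachable:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// kubectl's "dial tcp [::1]:8443: connect: connection refused" means no
    	// process is bound to the apiserver port; a plain dial shows the same thing.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }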
	I1218 01:50:20.319731 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:20.329891 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:20.329962 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:20.353449 1550381 cri.go:89] found id: ""
	I1218 01:50:20.353471 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.353479 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:20.353485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:20.353542 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:20.378067 1550381 cri.go:89] found id: ""
	I1218 01:50:20.378089 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.378098 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:20.378104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:20.378162 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:20.403262 1550381 cri.go:89] found id: ""
	I1218 01:50:20.403288 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.403297 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:20.403304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:20.403362 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:20.430817 1550381 cri.go:89] found id: ""
	I1218 01:50:20.430842 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.430851 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:20.430858 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:20.430916 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:20.456026 1550381 cri.go:89] found id: ""
	I1218 01:50:20.456049 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.456057 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:20.456064 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:20.456123 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:20.485362 1550381 cri.go:89] found id: ""
	I1218 01:50:20.485388 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.485397 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:20.485404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:20.485461 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:20.509757 1550381 cri.go:89] found id: ""
	I1218 01:50:20.509779 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.509788 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:20.509794 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:20.509851 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:20.540098 1550381 cri.go:89] found id: ""
	I1218 01:50:20.540122 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.540130 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:20.540139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:20.540151 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:20.597234 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:20.597269 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:20.611800 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:20.611826 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:20.741195 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:20.732954    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.733490    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735078    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735547    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.737340    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:20.741222 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:20.741235 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:20.766650 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:20.766689 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:23.295459 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:23.306363 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:23.306450 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:23.331822 1550381 cri.go:89] found id: ""
	I1218 01:50:23.331848 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.331857 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:23.331864 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:23.331925 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:23.357194 1550381 cri.go:89] found id: ""
	I1218 01:50:23.357219 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.357228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:23.357234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:23.357293 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:23.383201 1550381 cri.go:89] found id: ""
	I1218 01:50:23.383228 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.383238 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:23.383245 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:23.383306 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:23.409593 1550381 cri.go:89] found id: ""
	I1218 01:50:23.409619 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.409628 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:23.409636 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:23.409694 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:23.434134 1550381 cri.go:89] found id: ""
	I1218 01:50:23.434157 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.434167 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:23.434173 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:23.434231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:23.458615 1550381 cri.go:89] found id: ""
	I1218 01:50:23.458637 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.458645 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:23.458652 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:23.458714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:23.483411 1550381 cri.go:89] found id: ""
	I1218 01:50:23.483433 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.483441 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:23.483447 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:23.483505 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:23.510673 1550381 cri.go:89] found id: ""
	I1218 01:50:23.510697 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.510707 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:23.510716 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:23.510727 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:23.569129 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:23.569169 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:23.583622 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:23.583654 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:23.660608 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:23.639546    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.640211    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.646944    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.648070    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.649869    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:23.660646 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:23.660659 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:23.689685 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:23.689724 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
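When no containers are found, each cycle falls back to collecting the same four log bundles (kubelet, dmesg, containerd, container status; the order varies between cycles). A sketch that runs those commands in sequence, assuming a local /bin/bash and the same tools the node image provides:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The four diagnostic bundles the logs.go:123 lines gather each cycle,
    	// copied verbatim from the logged commands.
    	cmds := []struct{ label, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
    		fmt.Printf("== %s (err=%v, %d bytes) ==\n", c.label, err, len(out))
    	}
    }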
	I1218 01:50:26.245910 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:26.256314 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:26.256387 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:26.281224 1550381 cri.go:89] found id: ""
	I1218 01:50:26.281247 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.281257 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:26.281263 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:26.281331 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:26.310540 1550381 cri.go:89] found id: ""
	I1218 01:50:26.310567 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.310576 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:26.310583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:26.310642 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:26.336372 1550381 cri.go:89] found id: ""
	I1218 01:50:26.336399 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.336407 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:26.336413 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:26.336473 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:26.362095 1550381 cri.go:89] found id: ""
	I1218 01:50:26.362120 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.362129 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:26.362135 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:26.362199 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:26.387399 1550381 cri.go:89] found id: ""
	I1218 01:50:26.387424 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.387433 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:26.387439 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:26.387502 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:26.412769 1550381 cri.go:89] found id: ""
	I1218 01:50:26.412794 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.412803 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:26.412809 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:26.412878 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:26.437098 1550381 cri.go:89] found id: ""
	I1218 01:50:26.437124 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.437132 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:26.437139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:26.437223 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:26.462717 1550381 cri.go:89] found id: ""
	I1218 01:50:26.462744 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.462754 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:26.462764 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:26.462782 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:26.521734 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:26.521768 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:26.536748 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:26.536777 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:26.603709 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:26.594893    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.595492    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597257    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597791    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.599350    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:26.603730 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:26.603749 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:26.632522 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:26.632599 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:29.191094 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:29.202310 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:29.202386 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:29.227851 1550381 cri.go:89] found id: ""
	I1218 01:50:29.227878 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.227887 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:29.227893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:29.227960 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:29.257631 1550381 cri.go:89] found id: ""
	I1218 01:50:29.257656 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.257665 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:29.257671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:29.257740 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:29.283590 1550381 cri.go:89] found id: ""
	I1218 01:50:29.283615 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.283625 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:29.283631 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:29.283696 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:29.311410 1550381 cri.go:89] found id: ""
	I1218 01:50:29.311436 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.311445 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:29.311452 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:29.311517 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:29.342669 1550381 cri.go:89] found id: ""
	I1218 01:50:29.342695 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.342714 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:29.342721 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:29.342815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:29.367296 1550381 cri.go:89] found id: ""
	I1218 01:50:29.367321 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.367330 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:29.367336 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:29.367396 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:29.392236 1550381 cri.go:89] found id: ""
	I1218 01:50:29.392260 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.392269 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:29.392275 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:29.392336 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:29.417512 1550381 cri.go:89] found id: ""
	I1218 01:50:29.417538 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.417547 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:29.417556 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:29.417594 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:29.488248 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:29.479597    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.480325    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482140    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482810    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.484373    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:29.488272 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:29.488289 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:29.513850 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:29.513884 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:29.543041 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:29.543071 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:29.602048 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:29.602087 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
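The block above is one full iteration of minikube's apiserver wait loop: roughly every three seconds (01:50:29, 01:50:32, 01:50:35 in the timestamps) it probes for a kube-apiserver process, lists CRI containers for every control-plane component, finds none, and re-gathers the kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal bash sketch of the probe half of that loop, built only from the commands visible in the log (the 3-second sleep is an assumption read off the timestamps, not taken from minikube source):

    # Poll for the apiserver the way the log shows minikube doing it.
    # Commands are copied from the log lines above; the sleep interval
    # is inferred from the observed timestamps.
    while true; do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*'; then
        echo "kube-apiserver process is up"; break
      fi
      # No process yet: ask the CRI whether the container exists at all.
      sudo crictl ps -a --quiet --name=kube-apiserver
      sleep 3
    done
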
	I1218 01:50:32.117433 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:32.128498 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:32.128589 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:32.153547 1550381 cri.go:89] found id: ""
	I1218 01:50:32.153571 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.153580 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:32.153587 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:32.153647 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:32.178431 1550381 cri.go:89] found id: ""
	I1218 01:50:32.178455 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.178464 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:32.178471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:32.178529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:32.203336 1550381 cri.go:89] found id: ""
	I1218 01:50:32.203362 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.203371 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:32.203377 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:32.203434 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:32.230677 1550381 cri.go:89] found id: ""
	I1218 01:50:32.230702 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.230712 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:32.230718 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:32.230800 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:32.255544 1550381 cri.go:89] found id: ""
	I1218 01:50:32.255567 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.255576 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:32.255583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:32.255661 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:32.282405 1550381 cri.go:89] found id: ""
	I1218 01:50:32.282468 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.282486 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:32.282493 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:32.282551 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:32.311100 1550381 cri.go:89] found id: ""
	I1218 01:50:32.311124 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.311133 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:32.311139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:32.311195 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:32.339521 1550381 cri.go:89] found id: ""
	I1218 01:50:32.339550 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.339559 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:32.339568 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:32.339579 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:32.364381 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:32.364417 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:32.396991 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:32.397017 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:32.453109 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:32.453144 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:32.468129 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:32.468158 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:32.534370 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:32.525120    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.525758    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.527579    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.528149    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.529876    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:32.525120    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.525758    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.527579    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.528149    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.529876    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:35.036282 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:35.048487 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:35.048567 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:35.076340 1550381 cri.go:89] found id: ""
	I1218 01:50:35.076365 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.076373 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:35.076386 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:35.076451 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:35.104187 1550381 cri.go:89] found id: ""
	I1218 01:50:35.104211 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.104221 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:35.104227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:35.104290 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:35.131465 1550381 cri.go:89] found id: ""
	I1218 01:50:35.131536 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.131563 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:35.131583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:35.131672 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:35.158198 1550381 cri.go:89] found id: ""
	I1218 01:50:35.158264 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.158281 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:35.158289 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:35.158352 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:35.185390 1550381 cri.go:89] found id: ""
	I1218 01:50:35.185462 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.185476 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:35.185483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:35.185555 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:35.215800 1550381 cri.go:89] found id: ""
	I1218 01:50:35.215893 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.215919 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:35.215946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:35.216046 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:35.243559 1550381 cri.go:89] found id: ""
	I1218 01:50:35.243627 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.243652 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:35.243671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:35.243748 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:35.272051 1550381 cri.go:89] found id: ""
	I1218 01:50:35.272079 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.272088 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:35.272099 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:35.272110 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:35.328789 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:35.328829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:35.343746 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:35.343791 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:35.410255 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:35.400072    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.400453    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.402453    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.404159    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.404848    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:35.400072    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.400453    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.402453    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.404159    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.404848    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:35.410278 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:35.410290 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:35.436151 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:35.436194 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
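Every "describe nodes" gather in this section fails identically: /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container running nothing is listening there, so each request is refused on [::1]:8443. Two illustrative manual checks from inside the node would confirm that (these are not commands the test harness runs):

    # Expect no listener on the apiserver port while the log above repeats.
    sudo ss -ltn "sport = :8443"
    # A comparable probe to the one kubectl makes, minus the client machinery:
    curl -sk https://localhost:8443/healthz \
      || echo "connection refused: apiserver never came up"
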
	I1218 01:50:37.964765 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:37.975595 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:37.975668 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:38.006140 1550381 cri.go:89] found id: ""
	I1218 01:50:38.006168 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.006179 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:38.006186 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:38.006254 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:38.032670 1550381 cri.go:89] found id: ""
	I1218 01:50:38.032696 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.032704 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:38.032711 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:38.032789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:38.058961 1550381 cri.go:89] found id: ""
	I1218 01:50:38.058991 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.059004 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:38.059013 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:38.059086 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:38.093028 1550381 cri.go:89] found id: ""
	I1218 01:50:38.093053 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.093062 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:38.093069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:38.093130 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:38.118000 1550381 cri.go:89] found id: ""
	I1218 01:50:38.118024 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.118033 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:38.118040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:38.118099 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:38.143582 1550381 cri.go:89] found id: ""
	I1218 01:50:38.143609 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.143620 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:38.143627 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:38.143687 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:38.170663 1550381 cri.go:89] found id: ""
	I1218 01:50:38.170692 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.170701 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:38.170707 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:38.170773 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:38.195587 1550381 cri.go:89] found id: ""
	I1218 01:50:38.195610 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.195619 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:38.195629 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:38.195640 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:38.250718 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:38.250757 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:38.265740 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:38.265766 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:38.332572 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:38.323728    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.324588    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.326294    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.326975    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.328670    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:38.323728    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.324588    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.326294    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.326975    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.328670    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:38.332602 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:38.332653 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:38.358827 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:38.358864 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:40.892874 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:40.912835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:40.912911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:40.974270 1550381 cri.go:89] found id: ""
	I1218 01:50:40.974363 1550381 logs.go:282] 0 containers: []
	W1218 01:50:40.974391 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:40.974427 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:40.974538 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:41.009749 1550381 cri.go:89] found id: ""
	I1218 01:50:41.009826 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.009862 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:41.009893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:41.009999 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:41.036864 1550381 cri.go:89] found id: ""
	I1218 01:50:41.036933 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.036959 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:41.036974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:41.037050 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:41.062681 1550381 cri.go:89] found id: ""
	I1218 01:50:41.062708 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.062717 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:41.062723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:41.062785 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:41.088510 1550381 cri.go:89] found id: ""
	I1218 01:50:41.088537 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.088562 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:41.088569 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:41.088677 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:41.113288 1550381 cri.go:89] found id: ""
	I1218 01:50:41.113311 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.113321 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:41.113328 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:41.113431 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:41.138413 1550381 cri.go:89] found id: ""
	I1218 01:50:41.138438 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.138447 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:41.138453 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:41.138510 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:41.164559 1550381 cri.go:89] found id: ""
	I1218 01:50:41.164592 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.164601 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:41.164612 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:41.164655 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:41.220220 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:41.220257 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:41.235147 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:41.235175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:41.301835 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:41.291925    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.292729    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.294375    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.295219    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.297559    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:41.291925    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.292729    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.294375    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.295219    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.297559    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:41.301860 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:41.301873 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:41.327289 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:41.327322 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:43.855149 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:43.865567 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:43.865639 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:43.901178 1550381 cri.go:89] found id: ""
	I1218 01:50:43.901222 1550381 logs.go:282] 0 containers: []
	W1218 01:50:43.901231 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:43.901237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:43.901308 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:43.975051 1550381 cri.go:89] found id: ""
	I1218 01:50:43.975085 1550381 logs.go:282] 0 containers: []
	W1218 01:50:43.975095 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:43.975103 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:43.975175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:44.002012 1550381 cri.go:89] found id: ""
	I1218 01:50:44.002051 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.002062 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:44.002069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:44.002155 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:44.029977 1550381 cri.go:89] found id: ""
	I1218 01:50:44.030055 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.030090 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:44.030122 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:44.030212 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:44.055154 1550381 cri.go:89] found id: ""
	I1218 01:50:44.055182 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.055199 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:44.055206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:44.055264 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:44.080010 1550381 cri.go:89] found id: ""
	I1218 01:50:44.080081 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.080118 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:44.080142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:44.080234 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:44.106566 1550381 cri.go:89] found id: ""
	I1218 01:50:44.106590 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.106599 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:44.106606 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:44.106685 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:44.130836 1550381 cri.go:89] found id: ""
	I1218 01:50:44.130864 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.130873 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:44.130883 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:44.130894 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:44.185795 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:44.185833 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:44.200138 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:44.200164 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:44.265688 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:44.257127    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.257682    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.259162    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.259663    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.261095    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:44.257127    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.257682    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.259162    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.259663    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.261095    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:44.265760 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:44.265786 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:44.290625 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:44.290662 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
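The container-status gather line above uses a small shell fallback chain: `which crictl || echo crictl` resolves crictl's full path when it is on root's PATH and otherwise leaves the bare name for PATH lookup at run time, and if the crictl listing fails outright the whole command falls back to `docker ps -a`. The same one-liner, unpacked for readability (a restatement of the command from the log, not new behavior):

    # Unpacked: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    CRICTL="$(which crictl || echo crictl)"    # full path if found, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a  # docker CLI is the last-resort fallback
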
	I1218 01:50:46.817986 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:46.829340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:46.829433 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:46.854080 1550381 cri.go:89] found id: ""
	I1218 01:50:46.854105 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.854113 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:46.854121 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:46.854178 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:46.894044 1550381 cri.go:89] found id: ""
	I1218 01:50:46.894069 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.894078 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:46.894084 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:46.894144 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:46.979469 1550381 cri.go:89] found id: ""
	I1218 01:50:46.979536 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.979561 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:46.979580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:46.979670 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:47.007329 1550381 cri.go:89] found id: ""
	I1218 01:50:47.007393 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.007416 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:47.007435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:47.007524 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:47.036488 1550381 cri.go:89] found id: ""
	I1218 01:50:47.036515 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.036530 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:47.036537 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:47.036600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:47.061288 1550381 cri.go:89] found id: ""
	I1218 01:50:47.061318 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.061327 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:47.061334 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:47.061394 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:47.086889 1550381 cri.go:89] found id: ""
	I1218 01:50:47.086916 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.086925 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:47.086932 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:47.086995 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:47.111795 1550381 cri.go:89] found id: ""
	I1218 01:50:47.111826 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.111835 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:47.111844 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:47.111855 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:47.166527 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:47.166560 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:47.184211 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:47.184238 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:47.251953 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:47.243102    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.243996    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.245625    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.246165    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.247773    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:47.243102    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.243996    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.245625    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.246165    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.247773    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:47.251974 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:47.251986 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:47.277100 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:47.277134 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:49.805362 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:49.816269 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:49.816341 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:49.843797 1550381 cri.go:89] found id: ""
	I1218 01:50:49.843820 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.843828 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:49.843834 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:49.843894 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:49.869725 1550381 cri.go:89] found id: ""
	I1218 01:50:49.869751 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.869760 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:49.869766 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:49.869826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:49.913079 1550381 cri.go:89] found id: ""
	I1218 01:50:49.913102 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.913110 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:49.913117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:49.913175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:49.978366 1550381 cri.go:89] found id: ""
	I1218 01:50:49.978456 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.978481 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:49.978506 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:49.978669 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:50.015889 1550381 cri.go:89] found id: ""
	I1218 01:50:50.015961 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.015995 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:50.016015 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:50.016118 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:50.043973 1550381 cri.go:89] found id: ""
	I1218 01:50:50.044008 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.044020 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:50.044028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:50.044097 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:50.071368 1550381 cri.go:89] found id: ""
	I1218 01:50:50.071397 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.071407 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:50.071415 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:50.071492 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:50.100352 1550381 cri.go:89] found id: ""
	I1218 01:50:50.100381 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.100392 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:50.100402 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:50.100414 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:50.157120 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:50.157156 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:50.171935 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:50.171962 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:50.243754 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:50.233187    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.233848    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.235761    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.238144    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.239335    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:50.233187    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.233848    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.235761    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.238144    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.239335    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:50.243779 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:50.243792 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:50.271841 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:50.271895 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:52.801073 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:52.811866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:52.811938 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:52.841370 1550381 cri.go:89] found id: ""
	I1218 01:50:52.841396 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.841404 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:52.841411 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:52.841477 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:52.866527 1550381 cri.go:89] found id: ""
	I1218 01:50:52.866549 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.866557 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:52.866564 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:52.866629 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:52.905295 1550381 cri.go:89] found id: ""
	I1218 01:50:52.905323 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.905333 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:52.905340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:52.905402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:52.976848 1550381 cri.go:89] found id: ""
	I1218 01:50:52.976871 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.976880 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:52.976886 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:52.976945 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:53.005921 1550381 cri.go:89] found id: ""
	I1218 01:50:53.005996 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.006013 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:53.006021 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:53.006096 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:53.035172 1550381 cri.go:89] found id: ""
	I1218 01:50:53.035209 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.035219 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:53.035226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:53.035295 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:53.062748 1550381 cri.go:89] found id: ""
	I1218 01:50:53.062816 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.062841 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:53.062856 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:53.062933 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:53.088160 1550381 cri.go:89] found id: ""
	I1218 01:50:53.088194 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.088203 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:53.088215 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:53.088227 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:53.143868 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:53.143906 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:53.159169 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:53.159240 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:53.226415 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:53.217507    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.218119    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220118    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220684    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.222463    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
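	Every describe-nodes attempt in this run fails the same way: kubectl on the node cannot reach an apiserver at localhost:8443 because, as the crictl probes above show, no kube-apiserver container exists. A hypothetical manual check (not from the log) that should reproduce the failure, using the same binary and kubeconfig as the logged command; /readyz is the apiserver's standard health endpoint:
	
		sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
		  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
		# With nothing listening on 8443 this exits non-zero with "connection refused".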
	I1218 01:50:53.226438 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:53.226451 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:53.251410 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:53.251448 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
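	From this point the report repeats the same cycle roughly every three seconds until the start-up wait gives up: look for a running kube-apiserver process, list containers for each control-plane component, then gather kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of that retry loop (the component list and the 3-second interval are inferred from the timestamps above, not taken from minikube's source):
	
		# Hypothetical reconstruction of the wait loop visible in this log.
		COMPONENTS="kube-apiserver etcd coredns kube-scheduler kube-proxy"
		COMPONENTS="$COMPONENTS kube-controller-manager kindnet kubernetes-dashboard"
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		  for c in $COMPONENTS; do
		    sudo crictl ps -a --quiet --name="$c"  # any-state containers matching the name
		  done
		  sudo journalctl -u kubelet -n 400        # plus dmesg, describe nodes, containerd ...
		  sleep 3
		done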
	I1218 01:50:55.783464 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:55.793844 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:55.793915 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:55.822511 1550381 cri.go:89] found id: ""
	I1218 01:50:55.822543 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.822552 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:55.822559 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:55.822630 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:55.852049 1550381 cri.go:89] found id: ""
	I1218 01:50:55.852076 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.852084 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:55.852090 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:55.852167 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:55.877944 1550381 cri.go:89] found id: ""
	I1218 01:50:55.877974 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.877982 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:55.877989 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:55.878045 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:55.964104 1550381 cri.go:89] found id: ""
	I1218 01:50:55.964127 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.964136 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:55.964142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:55.964198 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:55.989628 1550381 cri.go:89] found id: ""
	I1218 01:50:55.989658 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.989667 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:55.989681 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:55.989752 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:56.024436 1550381 cri.go:89] found id: ""
	I1218 01:50:56.024465 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.024474 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:56.024480 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:56.024544 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:56.049953 1550381 cri.go:89] found id: ""
	I1218 01:50:56.050028 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.050045 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:56.050053 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:56.050118 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:56.075666 1550381 cri.go:89] found id: ""
	I1218 01:50:56.075711 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.075720 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:56.075729 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:56.075747 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:56.141793 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:56.132794    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.133650    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.135300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.135878    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.137492    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:56.141818 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:56.141830 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:56.166981 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:56.167013 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:56.193749 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:56.193777 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:56.248762 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:56.248796 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:58.763667 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:58.773893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:58.773964 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:58.801142 1550381 cri.go:89] found id: ""
	I1218 01:50:58.801168 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.801177 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:58.801184 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:58.801255 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:58.826909 1550381 cri.go:89] found id: ""
	I1218 01:50:58.826937 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.826946 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:58.826952 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:58.827011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:58.852298 1550381 cri.go:89] found id: ""
	I1218 01:50:58.852328 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.852337 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:58.852343 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:58.852402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:58.877078 1550381 cri.go:89] found id: ""
	I1218 01:50:58.877103 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.877112 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:58.877118 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:58.877179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:58.908546 1550381 cri.go:89] found id: ""
	I1218 01:50:58.908572 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.908582 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:58.908588 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:58.908665 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:58.963294 1550381 cri.go:89] found id: ""
	I1218 01:50:58.963327 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.963336 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:58.963342 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:58.963408 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:59.004870 1550381 cri.go:89] found id: ""
	I1218 01:50:59.004907 1550381 logs.go:282] 0 containers: []
	W1218 01:50:59.004917 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:59.004923 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:59.004995 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:59.030744 1550381 cri.go:89] found id: ""
	I1218 01:50:59.030812 1550381 logs.go:282] 0 containers: []
	W1218 01:50:59.030838 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:59.030854 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:59.030866 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:59.045546 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:59.045575 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:59.112855 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:59.104235    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.104777    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.106469    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.106981    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.108512    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:59.112876 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:59.112888 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:59.137778 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:59.137857 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:59.165599 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:59.165624 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:01.723994 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:01.734966 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:01.735033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:01.759065 1550381 cri.go:89] found id: ""
	I1218 01:51:01.759093 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.759102 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:01.759108 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:01.759169 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:01.787378 1550381 cri.go:89] found id: ""
	I1218 01:51:01.787406 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.787416 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:01.787421 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:01.787490 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:01.812815 1550381 cri.go:89] found id: ""
	I1218 01:51:01.812838 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.812847 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:01.812853 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:01.812912 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:01.838955 1550381 cri.go:89] found id: ""
	I1218 01:51:01.838981 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.838990 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:01.839003 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:01.839062 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:01.864230 1550381 cri.go:89] found id: ""
	I1218 01:51:01.864256 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.864266 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:01.864273 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:01.864335 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:01.890158 1550381 cri.go:89] found id: ""
	I1218 01:51:01.890184 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.890193 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:01.890199 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:01.890259 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:01.955214 1550381 cri.go:89] found id: ""
	I1218 01:51:01.955289 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.955313 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:01.955332 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:01.955421 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:01.997347 1550381 cri.go:89] found id: ""
	I1218 01:51:01.997414 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.997439 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:01.997457 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:01.997469 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:02.054965 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:02.055055 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:02.074503 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:02.074555 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:02.144467 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:02.135994    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.136861    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.138510    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.138865    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.140404    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:02.144499 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:02.144513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:02.170450 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:02.170493 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:04.704549 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:04.715641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:04.715714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:04.742904 1550381 cri.go:89] found id: ""
	I1218 01:51:04.742928 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.742937 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:04.742943 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:04.743002 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:04.768296 1550381 cri.go:89] found id: ""
	I1218 01:51:04.768323 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.768332 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:04.768338 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:04.768400 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:04.794825 1550381 cri.go:89] found id: ""
	I1218 01:51:04.794859 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.794868 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:04.794888 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:04.794953 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:04.820347 1550381 cri.go:89] found id: ""
	I1218 01:51:04.820375 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.820383 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:04.820390 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:04.820452 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:04.845796 1550381 cri.go:89] found id: ""
	I1218 01:51:04.845823 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.845832 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:04.845839 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:04.845899 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:04.870392 1550381 cri.go:89] found id: ""
	I1218 01:51:04.870418 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.870426 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:04.870433 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:04.870495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:04.918945 1550381 cri.go:89] found id: ""
	I1218 01:51:04.918979 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.918988 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:04.918995 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:04.919055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:04.974228 1550381 cri.go:89] found id: ""
	I1218 01:51:04.974255 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.974264 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:04.974273 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:04.974286 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:05.042680 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:05.033763    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.034389    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.036284    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.036826    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.038546    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:05.042706 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:05.042719 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:05.068392 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:05.068427 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:05.097162 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:05.097199 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:05.155869 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:05.155910 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:07.671922 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:07.682619 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:07.682688 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:07.707484 1550381 cri.go:89] found id: ""
	I1218 01:51:07.707512 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.707521 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:07.707528 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:07.707585 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:07.736732 1550381 cri.go:89] found id: ""
	I1218 01:51:07.736765 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.736774 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:07.736781 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:07.736841 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:07.761774 1550381 cri.go:89] found id: ""
	I1218 01:51:07.761800 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.761809 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:07.761815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:07.761876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:07.790605 1550381 cri.go:89] found id: ""
	I1218 01:51:07.790635 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.790644 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:07.790650 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:07.790714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:07.816203 1550381 cri.go:89] found id: ""
	I1218 01:51:07.816230 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.816239 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:07.816245 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:07.816304 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:07.841127 1550381 cri.go:89] found id: ""
	I1218 01:51:07.841150 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.841159 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:07.841165 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:07.841225 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:07.865946 1550381 cri.go:89] found id: ""
	I1218 01:51:07.866010 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.866036 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:07.866053 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:07.866143 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:07.916531 1550381 cri.go:89] found id: ""
	I1218 01:51:07.916559 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.916568 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:07.916578 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:07.916589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:07.983404 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:07.983433 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:08.038790 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:08.038829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:08.055026 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:08.055100 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:08.121982 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:08.112879    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.113469    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.115072    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.115668    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.117746    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:08.122053 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:08.122079 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:10.648476 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:10.659206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:10.659275 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:10.684487 1550381 cri.go:89] found id: ""
	I1218 01:51:10.684516 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.684525 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:10.684532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:10.684594 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:10.709248 1550381 cri.go:89] found id: ""
	I1218 01:51:10.709278 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.709288 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:10.709294 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:10.709354 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:10.733670 1550381 cri.go:89] found id: ""
	I1218 01:51:10.733700 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.733709 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:10.733716 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:10.733776 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:10.762711 1550381 cri.go:89] found id: ""
	I1218 01:51:10.762734 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.762748 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:10.762755 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:10.762814 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:10.791896 1550381 cri.go:89] found id: ""
	I1218 01:51:10.791929 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.791938 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:10.791944 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:10.792012 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:10.816916 1550381 cri.go:89] found id: ""
	I1218 01:51:10.816940 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.816951 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:10.816957 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:10.817018 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:10.848467 1550381 cri.go:89] found id: ""
	I1218 01:51:10.848533 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.848555 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:10.848575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:10.848684 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:10.872632 1550381 cri.go:89] found id: ""
	I1218 01:51:10.872694 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.872710 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:10.872719 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:10.872731 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:10.932049 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:10.932119 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:11.006112 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:11.006150 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:11.021573 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:11.021602 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:11.086764 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:11.077377    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.078427    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.080067    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.080416    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.082029    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:11.086785 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:11.086798 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:13.613916 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:13.625018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:13.625093 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:13.651186 1550381 cri.go:89] found id: ""
	I1218 01:51:13.651211 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.651220 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:13.651226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:13.651289 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:13.680145 1550381 cri.go:89] found id: ""
	I1218 01:51:13.680172 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.680181 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:13.680187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:13.680246 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:13.706941 1550381 cri.go:89] found id: ""
	I1218 01:51:13.706970 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.706980 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:13.706986 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:13.707046 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:13.735536 1550381 cri.go:89] found id: ""
	I1218 01:51:13.735562 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.735571 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:13.735578 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:13.735637 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:13.763111 1550381 cri.go:89] found id: ""
	I1218 01:51:13.763185 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.763209 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:13.763227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:13.763313 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:13.788754 1550381 cri.go:89] found id: ""
	I1218 01:51:13.788779 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.788787 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:13.788794 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:13.788883 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:13.813966 1550381 cri.go:89] found id: ""
	I1218 01:51:13.813989 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.814004 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:13.814010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:13.814068 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:13.838881 1550381 cri.go:89] found id: ""
	I1218 01:51:13.838907 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.838915 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:13.838925 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:13.838936 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:13.869225 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:13.869250 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:13.928878 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:13.928917 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:13.955609 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:13.955639 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:14.045680 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:14.037393    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.038154    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.039915    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.040305    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.041849    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:14.045710 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:14.045723 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
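The block above is one pass of the wait loop that repeats for the rest of this log: roughly every three seconds minikube re-checks for a running kube-apiserver (pgrep for the process, then crictl for the container), finds neither, and re-gathers kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal, self-contained Go sketch of that poll follows; the two probe commands are copied verbatim from the log, while the helper name apiserverUp, the fixed retry count, and the three-second sleep are illustrative assumptions, not minikube's actual implementation.

// Hedged sketch of the poll visible in this log: probe for a kube-apiserver
// process, then for a kube-apiserver container, and retry on a ~3 s cadence.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp runs the same two probes the log shows.
func apiserverUp() bool {
	// pgrep exits non-zero when no process matches the pattern.
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		return false
	}
	// Empty output corresponds to the `found id: ""` lines in the log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && len(out) > 0
}

func main() {
	for i := 0; i < 10; i++ { // assumption: the real loop runs to a deadline, not a count
		if apiserverUp() {
			fmt.Println("kube-apiserver is running")
			return
		}
		fmt.Println("kube-apiserver not found; gathering logs and retrying")
		time.Sleep(3 * time.Second) // matches the ~3 s spacing of the log timestamps
	}
}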
	I1218 01:51:16.572096 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:16.582596 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:16.582666 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:16.606933 1550381 cri.go:89] found id: ""
	I1218 01:51:16.606963 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.606972 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:16.606979 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:16.607038 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:16.631960 1550381 cri.go:89] found id: ""
	I1218 01:51:16.631989 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.632004 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:16.632010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:16.632071 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:16.659171 1550381 cri.go:89] found id: ""
	I1218 01:51:16.659198 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.659207 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:16.659213 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:16.659269 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:16.689389 1550381 cri.go:89] found id: ""
	I1218 01:51:16.689414 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.689422 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:16.689429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:16.689494 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:16.714209 1550381 cri.go:89] found id: ""
	I1218 01:51:16.714236 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.714246 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:16.714252 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:16.714311 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:16.739422 1550381 cri.go:89] found id: ""
	I1218 01:51:16.739450 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.739461 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:16.739467 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:16.739529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:16.765164 1550381 cri.go:89] found id: ""
	I1218 01:51:16.765231 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.765256 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:16.765283 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:16.765372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:16.790914 1550381 cri.go:89] found id: ""
	I1218 01:51:16.790990 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.791014 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:16.791035 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:16.791063 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:16.848408 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:16.848446 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:16.864121 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:16.864199 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:16.967366 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:16.949726    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.950835    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.952284    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.953468    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.954281    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:16.949726    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.950835    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.952284    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.953468    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.954281    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:16.967436 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:16.967463 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:17.008108 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:17.008145 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:19.540127 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:19.550917 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:19.550989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:19.574864 1550381 cri.go:89] found id: ""
	I1218 01:51:19.574939 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.574964 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:19.574978 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:19.575059 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:19.605362 1550381 cri.go:89] found id: ""
	I1218 01:51:19.605386 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.605395 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:19.605401 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:19.605465 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:19.631747 1550381 cri.go:89] found id: ""
	I1218 01:51:19.631774 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.631789 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:19.631795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:19.631870 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:19.656716 1550381 cri.go:89] found id: ""
	I1218 01:51:19.656740 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.656749 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:19.656755 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:19.656813 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:19.689179 1550381 cri.go:89] found id: ""
	I1218 01:51:19.689206 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.689215 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:19.689221 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:19.689292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:19.713751 1550381 cri.go:89] found id: ""
	I1218 01:51:19.713774 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.713783 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:19.713789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:19.713846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:19.737993 1550381 cri.go:89] found id: ""
	I1218 01:51:19.738063 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.738074 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:19.738081 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:19.738150 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:19.763540 1550381 cri.go:89] found id: ""
	I1218 01:51:19.763565 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.763574 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:19.763583 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:19.763618 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:19.818946 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:19.818982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:19.834461 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:19.834487 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:19.932671 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:19.901289    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902181    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902894    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.904924    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.905669    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:19.901289    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902181    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902894    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.904924    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.905669    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:19.932695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:19.932708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:19.986050 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:19.986085 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
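Each pass opens with the same sweep: for every expected control-plane container name, `sudo crictl ps -a --quiet --name=<name>` is run, and the empty output (`found id: ""`) produces the `No container was found matching ...` warning. Below is a sketch of that sweep, under the assumption that a plain loop over the component list is faithful enough; the component names and the crictl invocation are taken from the log, everything else is illustrative.

// Hedged sketch of the per-component crictl sweep repeated before each gather.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The eight names the log queries, in the order they appear.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if strings.TrimSpace(string(out)) == "" {
			// Mirrors the logs.go:284 warning emitted in this log.
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}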
	I1218 01:51:22.530737 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:22.542075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:22.542151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:22.567921 1550381 cri.go:89] found id: ""
	I1218 01:51:22.567945 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.567953 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:22.567960 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:22.568020 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:22.595894 1550381 cri.go:89] found id: ""
	I1218 01:51:22.595919 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.595928 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:22.595933 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:22.595991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:22.620929 1550381 cri.go:89] found id: ""
	I1218 01:51:22.620953 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.620968 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:22.620974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:22.621040 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:22.646170 1550381 cri.go:89] found id: ""
	I1218 01:51:22.646195 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.646203 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:22.646210 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:22.646270 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:22.675272 1550381 cri.go:89] found id: ""
	I1218 01:51:22.675296 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.675305 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:22.675312 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:22.675376 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:22.702994 1550381 cri.go:89] found id: ""
	I1218 01:51:22.703023 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.703033 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:22.703039 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:22.703106 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:22.728507 1550381 cri.go:89] found id: ""
	I1218 01:51:22.728533 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.728542 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:22.728548 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:22.728608 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:22.754134 1550381 cri.go:89] found id: ""
	I1218 01:51:22.754157 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.754165 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:22.754175 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:22.754187 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:22.810488 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:22.810539 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:22.826174 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:22.826212 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:22.906393 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:22.881838    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.882696    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884418    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884947    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.886679    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:22.881838    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.882696    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884418    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884947    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.886679    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:22.906431 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:22.906448 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:22.948969 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:22.949025 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:25.504885 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:25.515607 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:25.515676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:25.539969 1550381 cri.go:89] found id: ""
	I1218 01:51:25.539994 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.540003 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:25.540010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:25.540076 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:25.565160 1550381 cri.go:89] found id: ""
	I1218 01:51:25.565189 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.565198 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:25.565204 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:25.565262 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:25.593521 1550381 cri.go:89] found id: ""
	I1218 01:51:25.593545 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.593554 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:25.593560 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:25.593625 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:25.618492 1550381 cri.go:89] found id: ""
	I1218 01:51:25.618523 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.618532 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:25.618538 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:25.618600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:25.642784 1550381 cri.go:89] found id: ""
	I1218 01:51:25.642810 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.642819 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:25.642825 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:25.642885 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:25.667732 1550381 cri.go:89] found id: ""
	I1218 01:51:25.667759 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.667768 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:25.667778 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:25.667843 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:25.695444 1550381 cri.go:89] found id: ""
	I1218 01:51:25.695468 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.695477 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:25.695483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:25.695540 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:25.720467 1550381 cri.go:89] found id: ""
	I1218 01:51:25.720492 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.720501 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:25.720510 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:25.720522 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:25.777380 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:25.777416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:25.793106 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:25.793135 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:25.859796 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:25.850842    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.851589    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853179    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853705    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.855254    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:25.850842    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.851589    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853179    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853705    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.855254    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:25.859817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:25.859829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:25.885375 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:25.885414 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
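The recurring `failed describe nodes` block is the most informative part of each pass: kubectl exits 1 because the TCP connect to localhost:8443 is refused, meaning nothing is listening where the apiserver should be, which is consistent with the empty crictl sweeps. A hedged sketch of running and classifying that command follows; the binary path and kubeconfig flag are copied from the log, and the substring match on "connection refused" is an illustrative shortcut, not how minikube actually classifies the failure.

// Hedged sketch: run the describe-nodes command from the log and distinguish
// "control plane down" from other failures.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	switch {
	case err == nil:
		fmt.Println(string(out)) // apiserver reachable: print the node report
	case strings.Contains(string(out), "connection refused"):
		// The situation in this log: the TCP connect to [::1]:8443 is refused
		// before any HTTP exchange, so the control plane never came up.
		fmt.Println("apiserver not listening on :8443; control plane is down")
	default:
		fmt.Printf("describe nodes failed: %v\n%s\n", err, out)
	}
}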
	I1218 01:51:28.480490 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:28.491517 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:28.491587 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:28.528988 1550381 cri.go:89] found id: ""
	I1218 01:51:28.529011 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.529020 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:28.529027 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:28.529088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:28.554389 1550381 cri.go:89] found id: ""
	I1218 01:51:28.554415 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.554423 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:28.554429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:28.554491 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:28.595339 1550381 cri.go:89] found id: ""
	I1218 01:51:28.595365 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.595374 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:28.595380 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:28.595440 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:28.620349 1550381 cri.go:89] found id: ""
	I1218 01:51:28.620376 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.620384 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:28.620391 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:28.620451 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:28.644815 1550381 cri.go:89] found id: ""
	I1218 01:51:28.644844 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.644854 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:28.644862 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:28.644923 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:28.669719 1550381 cri.go:89] found id: ""
	I1218 01:51:28.669746 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.669755 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:28.669762 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:28.669822 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:28.694390 1550381 cri.go:89] found id: ""
	I1218 01:51:28.694415 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.694424 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:28.694430 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:28.694491 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:28.719213 1550381 cri.go:89] found id: ""
	I1218 01:51:28.719238 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.719247 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:28.719257 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:28.719268 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:28.777972 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:28.778010 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:28.792667 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:28.792698 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:28.863732 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:28.855982    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.856419    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.857906    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.858373    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.859886    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:28.855982    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.856419    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.857906    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.858373    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.859886    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:28.863755 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:28.863768 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:28.896538 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:28.896571 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:31.484234 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:31.494710 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:31.494781 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:31.519036 1550381 cri.go:89] found id: ""
	I1218 01:51:31.519061 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.519070 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:31.519077 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:31.519136 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:31.543677 1550381 cri.go:89] found id: ""
	I1218 01:51:31.543702 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.543710 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:31.543717 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:31.543778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:31.570267 1550381 cri.go:89] found id: ""
	I1218 01:51:31.570299 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.570308 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:31.570315 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:31.570406 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:31.597988 1550381 cri.go:89] found id: ""
	I1218 01:51:31.598024 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.598034 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:31.598040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:31.598102 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:31.625949 1550381 cri.go:89] found id: ""
	I1218 01:51:31.625983 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.625993 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:31.626014 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:31.626097 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:31.654833 1550381 cri.go:89] found id: ""
	I1218 01:51:31.654898 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.654923 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:31.654937 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:31.655011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:31.686105 1550381 cri.go:89] found id: ""
	I1218 01:51:31.686132 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.686143 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:31.686149 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:31.686233 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:31.711106 1550381 cri.go:89] found id: ""
	I1218 01:51:31.711139 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.711148 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:31.711158 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:31.711187 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:31.725923 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:31.725952 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:31.789766 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:31.780492    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.781242    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.782882    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.783497    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.785156    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:31.780492    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.781242    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.782882    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.783497    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.785156    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:31.789789 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:31.789801 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:31.815524 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:31.815558 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:31.843690 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:31.843718 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
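Note that the gather order rotates between passes (this pass collected kubelet last, while the 01:51:37 pass collects dmesg last), but the set of sources stays fixed, and each source maps to one shell command. The small Go sketch below reproduces those commands verbatim to show them side by side; invoking them from a map, and therefore in unspecified order, is itself an assumption made only for compactness.

// Hedged sketch of the "Gathering logs for ..." steps, one shell command per source.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		// The log runs each command through /bin/bash -c, so pipes and
		// backquote substitution behave as they do there.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
	}
}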
	I1218 01:51:34.403611 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:34.414490 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:34.414564 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:34.438520 1550381 cri.go:89] found id: ""
	I1218 01:51:34.438544 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.438552 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:34.438562 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:34.438625 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:34.462603 1550381 cri.go:89] found id: ""
	I1218 01:51:34.462627 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.462636 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:34.462642 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:34.462699 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:34.490371 1550381 cri.go:89] found id: ""
	I1218 01:51:34.490395 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.490404 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:34.490410 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:34.490471 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:34.513456 1550381 cri.go:89] found id: ""
	I1218 01:51:34.513480 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.513488 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:34.513495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:34.513562 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:34.537361 1550381 cri.go:89] found id: ""
	I1218 01:51:34.537385 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.537394 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:34.537407 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:34.537468 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:34.561230 1550381 cri.go:89] found id: ""
	I1218 01:51:34.561253 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.561261 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:34.561268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:34.561348 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:34.585180 1550381 cri.go:89] found id: ""
	I1218 01:51:34.585204 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.585212 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:34.585219 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:34.585280 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:34.609741 1550381 cri.go:89] found id: ""
	I1218 01:51:34.609766 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.609775 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:34.609785 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:34.609802 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:34.667204 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:34.667238 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:34.682240 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:34.682269 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:34.745795 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:34.737582    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.737983    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739504    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739826    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.741316    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:34.737582    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.737983    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739504    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739826    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.741316    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:34.745817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:34.745831 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:34.771222 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:34.771256 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:37.302139 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:37.313213 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:37.313316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:37.348873 1550381 cri.go:89] found id: ""
	I1218 01:51:37.348895 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.348903 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:37.348909 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:37.348966 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:37.374229 1550381 cri.go:89] found id: ""
	I1218 01:51:37.374256 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.374265 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:37.374271 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:37.374332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:37.398897 1550381 cri.go:89] found id: ""
	I1218 01:51:37.398920 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.398928 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:37.398935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:37.398991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:37.422904 1550381 cri.go:89] found id: ""
	I1218 01:51:37.422930 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.422939 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:37.422946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:37.423010 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:37.451168 1550381 cri.go:89] found id: ""
	I1218 01:51:37.451196 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.451205 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:37.451211 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:37.451273 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:37.477986 1550381 cri.go:89] found id: ""
	I1218 01:51:37.478011 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.478021 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:37.478028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:37.478096 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:37.504463 1550381 cri.go:89] found id: ""
	I1218 01:51:37.504487 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.504497 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:37.504503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:37.504563 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:37.529381 1550381 cri.go:89] found id: ""
	I1218 01:51:37.529405 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.529414 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:37.529423 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:37.529435 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:37.598285 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:37.584791    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.590506    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.591762    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.592298    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.593854    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:37.598307 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:37.598319 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:37.623017 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:37.623052 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:37.654645 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:37.654674 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:37.711304 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:37.711339 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
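(Editor's note: the cycle above repeats every ~3 seconds for the rest of this section: minikube polls for a kube-apiserver process, finds none, re-lists CRI containers, and re-gathers logs. A minimal Go sketch of that polling pattern follows; the 3-second cadence and the deadline are assumptions read off the timestamps, not minikube's actual constants.)

// Hypothetical sketch (not minikube's code) of the wait loop visible in
// this log: check periodically whether a kube-apiserver process exists,
// and give up after a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits non-zero when nothing matches, so a nil error
	// means a kube-apiserver process was found.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}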
	I1218 01:51:40.226741 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:40.238408 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:40.238480 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:40.263769 1550381 cri.go:89] found id: ""
	I1218 01:51:40.263795 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.263804 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:40.263810 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:40.263896 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:40.289194 1550381 cri.go:89] found id: ""
	I1218 01:51:40.289220 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.289228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:40.289234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:40.289292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:40.314040 1550381 cri.go:89] found id: ""
	I1218 01:51:40.314064 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.314073 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:40.314079 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:40.314137 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:40.339145 1550381 cri.go:89] found id: ""
	I1218 01:51:40.339180 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.339189 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:40.339212 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:40.339293 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:40.364902 1550381 cri.go:89] found id: ""
	I1218 01:51:40.364931 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.364940 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:40.364947 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:40.365009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:40.389709 1550381 cri.go:89] found id: ""
	I1218 01:51:40.389730 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.389739 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:40.389745 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:40.389804 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:40.414858 1550381 cri.go:89] found id: ""
	I1218 01:51:40.414882 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.414891 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:40.414898 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:40.414958 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:40.441847 1550381 cri.go:89] found id: ""
	I1218 01:51:40.441875 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.441884 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:40.441893 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:40.441906 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:40.456791 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:40.456821 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:40.525853 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:40.518222    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.518768    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.520336    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.520859    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.521950    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:40.525876 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:40.525889 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:40.550993 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:40.551028 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:40.581756 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:40.581786 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
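(Editor's note: each cycle checks one control-plane component at a time with `crictl ps -a --quiet --name=<component>`; empty output is what produces the `found id: ""` / `No container was found matching ...` pairs above. A hypothetical sketch of that check, with the component list taken from the log:)

// Hypothetical sketch of the per-component container check: `crictl ps -a
// --quiet --name=<component>` prints one container ID per line, so empty
// output means no container (running or exited) matches that name.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s)\n", name, len(ids))
		}
	}
}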
	I1218 01:51:43.139640 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:43.166426 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:43.166501 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:43.205967 1550381 cri.go:89] found id: ""
	I1218 01:51:43.206046 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.206071 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:43.206091 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:43.206223 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:43.234922 1550381 cri.go:89] found id: ""
	I1218 01:51:43.234950 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.234958 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:43.234964 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:43.235023 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:43.261353 1550381 cri.go:89] found id: ""
	I1218 01:51:43.261376 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.261385 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:43.261392 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:43.261482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:43.286879 1550381 cri.go:89] found id: ""
	I1218 01:51:43.286906 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.286915 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:43.286922 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:43.286982 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:43.312530 1550381 cri.go:89] found id: ""
	I1218 01:51:43.312554 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.312568 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:43.312575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:43.312667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:43.337185 1550381 cri.go:89] found id: ""
	I1218 01:51:43.337207 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.337217 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:43.337223 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:43.337280 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:43.361707 1550381 cri.go:89] found id: ""
	I1218 01:51:43.361731 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.361741 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:43.361747 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:43.361805 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:43.391450 1550381 cri.go:89] found id: ""
	I1218 01:51:43.391483 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.391492 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:43.391502 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:43.391513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:43.449067 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:43.449104 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:43.464299 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:43.464329 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:43.534945 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:43.525741    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.526498    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.528182    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.528863    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.530697    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:43.534968 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:43.534980 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:43.560324 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:43.560357 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
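(Editor's note: the `describe nodes` probe runs the pinned kubectl binary against the node-local kubeconfig, and its stderr is the source of the `connection refused` lines above. A hedged sketch of the same invocation:)

// Hypothetical sketch of the "describe nodes" probe: run the version-pinned
// kubectl against /var/lib/minikube/kubeconfig and capture stderr, which is
// where the repeated "connection refused" errors come from.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
	}
}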
	I1218 01:51:46.089618 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:46.100369 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:46.100466 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:46.125679 1550381 cri.go:89] found id: ""
	I1218 01:51:46.125705 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.125714 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:46.125722 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:46.125789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:46.187262 1550381 cri.go:89] found id: ""
	I1218 01:51:46.187300 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.187310 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:46.187317 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:46.187376 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:46.244106 1550381 cri.go:89] found id: ""
	I1218 01:51:46.244130 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.244139 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:46.244145 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:46.244212 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:46.269674 1550381 cri.go:89] found id: ""
	I1218 01:51:46.269740 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.269769 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:46.269787 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:46.269876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:46.299177 1550381 cri.go:89] found id: ""
	I1218 01:51:46.299199 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.299209 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:46.299215 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:46.299273 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:46.328469 1550381 cri.go:89] found id: ""
	I1218 01:51:46.328491 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.328499 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:46.328506 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:46.328564 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:46.354262 1550381 cri.go:89] found id: ""
	I1218 01:51:46.354288 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.354297 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:46.354304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:46.354362 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:46.378724 1550381 cri.go:89] found id: ""
	I1218 01:51:46.378752 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.378761 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:46.378770 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:46.378781 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:46.433721 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:46.433759 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:46.448259 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:46.448295 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:46.511060 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:46.503056    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.503703    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.504880    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.505441    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.507108    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:46.511081 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:46.511093 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:46.536601 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:46.536803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
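(Editor's note: the "container status" gather uses a shell fallback: resolve crictl via `which` (or try the bare name), and fall back to `docker ps -a` if that fails. A small sketch of the same fallback, using `$(...)` in place of the log's backticks:)

// Hypothetical sketch of the container-status fallback: prefer crictl,
// fall back to `docker ps -a` if crictl is missing or errors out.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("both crictl and docker failed: %v\n", err)
	}
	fmt.Print(string(out)) // CombinedOutput still returns partial output on error
}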
	I1218 01:51:49.070137 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:49.081049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:49.081123 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:49.106438 1550381 cri.go:89] found id: ""
	I1218 01:51:49.106465 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.106474 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:49.106483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:49.106546 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:49.131233 1550381 cri.go:89] found id: ""
	I1218 01:51:49.131257 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.131265 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:49.131272 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:49.131337 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:49.194204 1550381 cri.go:89] found id: ""
	I1218 01:51:49.194233 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.194242 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:49.194248 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:49.194310 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:49.244013 1550381 cri.go:89] found id: ""
	I1218 01:51:49.244039 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.244048 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:49.244054 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:49.244120 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:49.271185 1550381 cri.go:89] found id: ""
	I1218 01:51:49.271211 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.271219 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:49.271226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:49.271288 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:49.298143 1550381 cri.go:89] found id: ""
	I1218 01:51:49.298170 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.298180 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:49.298187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:49.298251 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:49.324346 1550381 cri.go:89] found id: ""
	I1218 01:51:49.324374 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.324383 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:49.324389 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:49.324450 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:49.350033 1550381 cri.go:89] found id: ""
	I1218 01:51:49.350063 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.350072 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:49.350081 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:49.350094 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:49.382558 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:49.382589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:49.438756 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:49.438795 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:49.453736 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:49.453765 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:49.515649 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:49.506698    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.507341    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.508268    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.509832    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.510129    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:49.515672 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:49.515684 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
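(Editor's note: `dial tcp [::1]:8443: connect: connection refused` means nothing is listening on the apiserver port at all, as opposed to a timeout or a TLS/auth failure. A quick way to reproduce that distinction without kubectl; a sketch, not part of the test suite:)

// Hypothetical sketch: a raw TCP dial to the apiserver port distinguishes
// "nothing listening" (connection refused, as in this log) from a listener
// that is up but unhealthy.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}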
	I1218 01:51:52.041321 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:52.052329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:52.052403 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:52.082403 1550381 cri.go:89] found id: ""
	I1218 01:51:52.082434 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.082444 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:52.082451 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:52.082513 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:52.108691 1550381 cri.go:89] found id: ""
	I1218 01:51:52.108720 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.108729 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:52.108735 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:52.108795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:52.138279 1550381 cri.go:89] found id: ""
	I1218 01:51:52.138314 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.138323 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:52.138329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:52.138393 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:52.207039 1550381 cri.go:89] found id: ""
	I1218 01:51:52.207067 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.207076 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:52.207083 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:52.207150 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:52.236007 1550381 cri.go:89] found id: ""
	I1218 01:51:52.236042 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.236052 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:52.236059 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:52.236125 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:52.267547 1550381 cri.go:89] found id: ""
	I1218 01:51:52.267583 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.267593 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:52.267599 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:52.267668 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:52.295275 1550381 cri.go:89] found id: ""
	I1218 01:51:52.295310 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.295320 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:52.295326 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:52.295407 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:52.324187 1550381 cri.go:89] found id: ""
	I1218 01:51:52.324215 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.324224 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:52.324234 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:52.324246 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:52.352151 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:52.352182 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:52.408412 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:52.408446 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:52.423024 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:52.423098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:52.488577 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:52.479672    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.480321    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.481877    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.482453    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.484212    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:52.488599 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:52.488613 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:55.015396 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:55.026777 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:55.026851 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:55.052687 1550381 cri.go:89] found id: ""
	I1218 01:51:55.052713 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.052722 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:55.052728 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:55.052786 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:55.082492 1550381 cri.go:89] found id: ""
	I1218 01:51:55.082515 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.082524 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:55.082531 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:55.082592 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:55.107565 1550381 cri.go:89] found id: ""
	I1218 01:51:55.107592 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.107600 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:55.107607 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:55.107674 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:55.135213 1550381 cri.go:89] found id: ""
	I1218 01:51:55.135241 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.135249 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:55.135270 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:55.135332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:55.177099 1550381 cri.go:89] found id: ""
	I1218 01:51:55.177128 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.177137 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:55.177143 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:55.177210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:55.224917 1550381 cri.go:89] found id: ""
	I1218 01:51:55.224946 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.224954 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:55.224961 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:55.225020 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:55.252438 1550381 cri.go:89] found id: ""
	I1218 01:51:55.252465 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.252473 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:55.252479 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:55.252538 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:55.277054 1550381 cri.go:89] found id: ""
	I1218 01:51:55.277074 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.277082 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:55.277091 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:55.277106 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:55.292214 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:55.292240 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:55.354379 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:55.346236    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.346747    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.348217    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.348649    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.350094    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:55.354401 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:55.354412 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:55.379112 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:55.379143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:55.407257 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:55.407284 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
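(Editor's note: the remaining gather steps are plain journal and kernel-log pulls: the last 400 lines of each systemd unit plus warning-and-above dmesg output. A sketch that mirrors those invocations; the `gather` helper is assumed, not minikube's code:)

// Hypothetical sketch of the log-gathering step, mirroring the journalctl
// and dmesg commands run in each cycle of this log.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, shellCmd string) {
	out, err := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}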
	I1218 01:51:57.964281 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:57.975020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:57.975088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:58.005630 1550381 cri.go:89] found id: ""
	I1218 01:51:58.005658 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.005667 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:58.005674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:58.005745 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:58.032296 1550381 cri.go:89] found id: ""
	I1218 01:51:58.032319 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.032329 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:58.032335 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:58.032402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:58.061454 1550381 cri.go:89] found id: ""
	I1218 01:51:58.061479 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.061488 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:58.061495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:58.061554 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:58.087783 1550381 cri.go:89] found id: ""
	I1218 01:51:58.087808 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.087817 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:58.087824 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:58.087884 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:58.115473 1550381 cri.go:89] found id: ""
	I1218 01:51:58.115496 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.115505 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:58.115512 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:58.115599 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:58.152731 1550381 cri.go:89] found id: ""
	I1218 01:51:58.152757 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.152766 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:58.152773 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:58.152832 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:58.207262 1550381 cri.go:89] found id: ""
	I1218 01:51:58.207284 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.207302 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:58.207310 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:58.207367 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:58.244074 1550381 cri.go:89] found id: ""
	I1218 01:51:58.244103 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.244112 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:58.244121 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:58.244133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:58.305417 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:58.305455 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:58.320298 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:58.320326 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:58.392177 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:58.383564    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.384410    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.386085    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.386657    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.388186    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:58.392200 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:58.392215 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:58.418264 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:58.418299 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:00.947037 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:00.958414 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:00.958504 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:00.982432 1550381 cri.go:89] found id: ""
	I1218 01:52:00.982456 1550381 logs.go:282] 0 containers: []
	W1218 01:52:00.982465 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:00.982472 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:00.982554 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:01.011620 1550381 cri.go:89] found id: ""
	I1218 01:52:01.011645 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.011654 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:01.011661 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:01.011721 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:01.038538 1550381 cri.go:89] found id: ""
	I1218 01:52:01.038564 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.038572 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:01.038578 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:01.038636 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:01.062732 1550381 cri.go:89] found id: ""
	I1218 01:52:01.062758 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.062768 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:01.062775 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:01.062836 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:01.088130 1550381 cri.go:89] found id: ""
	I1218 01:52:01.088156 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.088165 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:01.088172 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:01.088241 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:01.116412 1550381 cri.go:89] found id: ""
	I1218 01:52:01.116440 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.116450 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:01.116471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:01.116532 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:01.157710 1550381 cri.go:89] found id: ""
	I1218 01:52:01.157737 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.157747 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:01.157754 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:01.157815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:01.207757 1550381 cri.go:89] found id: ""
	I1218 01:52:01.207784 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.207794 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
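Each cycle enumerates the expected control-plane containers by name with `crictl ps -a --quiet --name=<component>`; an empty ID list is what produces the paired "0 containers" / "No container was found matching" lines. A rough local equivalent of that check (hypothetical helper; the real runner executes the same command over SSH inside the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs shells out to crictl as the log lines above do,
	// returning the IDs of containers (running or exited) whose name
	// matches. An empty result corresponds to "0 containers" in the log.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
	}
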
	I1218 01:52:01.207803 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:01.207815 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:01.293467 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:01.293515 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:01.308790 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:01.308825 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:01.377467 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:01.369257    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370071    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370750    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.372362    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.373023    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:01.377487 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:01.377501 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:01.403688 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:01.403722 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:03.936540 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:03.947485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:03.947559 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:03.972917 1550381 cri.go:89] found id: ""
	I1218 01:52:03.972939 1550381 logs.go:282] 0 containers: []
	W1218 01:52:03.972947 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:03.972953 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:03.973018 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:03.997960 1550381 cri.go:89] found id: ""
	I1218 01:52:03.997983 1550381 logs.go:282] 0 containers: []
	W1218 01:52:03.997992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:03.997998 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:03.998056 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:04.027683 1550381 cri.go:89] found id: ""
	I1218 01:52:04.027754 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.027780 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:04.027808 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:04.027916 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:04.054769 1550381 cri.go:89] found id: ""
	I1218 01:52:04.054833 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.054843 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:04.054849 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:04.054917 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:04.081260 1550381 cri.go:89] found id: ""
	I1218 01:52:04.081284 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.081293 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:04.081299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:04.081372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:04.106563 1550381 cri.go:89] found id: ""
	I1218 01:52:04.106590 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.106599 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:04.106606 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:04.106667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:04.131682 1550381 cri.go:89] found id: ""
	I1218 01:52:04.131708 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.131717 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:04.131724 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:04.131790 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:04.170215 1550381 cri.go:89] found id: ""
	I1218 01:52:04.170242 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.170251 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:04.170260 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:04.170273 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:04.211169 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:04.211207 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:04.263603 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:04.263636 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:04.319257 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:04.319294 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:04.334300 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:04.334329 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:04.399992 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:04.392155    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.392854    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394600    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394909    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.396367    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:06.900248 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:06.910997 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:06.911067 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:06.935514 1550381 cri.go:89] found id: ""
	I1218 01:52:06.935539 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.935548 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:06.935554 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:06.935612 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:06.959911 1550381 cri.go:89] found id: ""
	I1218 01:52:06.959933 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.959942 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:06.959949 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:06.960006 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:06.989689 1550381 cri.go:89] found id: ""
	I1218 01:52:06.989710 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.989719 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:06.989725 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:06.989783 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:07.016553 1550381 cri.go:89] found id: ""
	I1218 01:52:07.016578 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.016587 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:07.016594 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:07.016676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:07.042084 1550381 cri.go:89] found id: ""
	I1218 01:52:07.042106 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.042115 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:07.042121 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:07.042179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:07.067075 1550381 cri.go:89] found id: ""
	I1218 01:52:07.067097 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.067107 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:07.067113 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:07.067176 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:07.096366 1550381 cri.go:89] found id: ""
	I1218 01:52:07.096388 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.096398 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:07.096405 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:07.096465 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:07.125403 1550381 cri.go:89] found id: ""
	I1218 01:52:07.125426 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.125434 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:07.125444 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:07.125456 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:07.146124 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:07.146152 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:07.254257 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:07.245617   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.246126   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.247718   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.248202   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.249840   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:07.254280 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:07.254292 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:07.280552 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:07.280590 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:07.307796 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:07.307825 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:09.873637 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:09.884205 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:09.884275 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:09.909771 1550381 cri.go:89] found id: ""
	I1218 01:52:09.909796 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.909805 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:09.909812 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:09.909869 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:09.934051 1550381 cri.go:89] found id: ""
	I1218 01:52:09.934082 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.934092 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:09.934098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:09.934161 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:09.964504 1550381 cri.go:89] found id: ""
	I1218 01:52:09.964528 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.964550 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:09.964561 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:09.964662 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:09.990501 1550381 cri.go:89] found id: ""
	I1218 01:52:09.990525 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.990534 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:09.990543 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:09.990616 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:10.028312 1550381 cri.go:89] found id: ""
	I1218 01:52:10.028339 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.028348 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:10.028355 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:10.028419 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:10.054415 1550381 cri.go:89] found id: ""
	I1218 01:52:10.054443 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.054453 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:10.054460 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:10.054545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:10.085976 1550381 cri.go:89] found id: ""
	I1218 01:52:10.086003 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.086013 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:10.086020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:10.086081 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:10.112422 1550381 cri.go:89] found id: ""
	I1218 01:52:10.112455 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.112464 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:10.112473 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:10.112485 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:10.214552 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:10.196275   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.197311   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199133   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199838   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.203593   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:10.214579 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:10.214591 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:10.245834 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:10.245872 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:10.278949 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:10.278983 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:10.338117 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:10.338153 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
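The gathering steps themselves are plain shell pipelines run through /bin/bash -c: journalctl for the kubelet and containerd units, a severity-filtered dmesg, and a crictl/docker ps fallback for container status. A small sketch of the journalctl variant, assuming a local stand-in for minikube's ssh_runner but using the same command string seen in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherUnitLogs reproduces the journalctl invocation from the log above
	// for a given systemd unit, returning its last n lines. Hypothetical
	// local stand-in; the real runner executes this inside the node over SSH.
	func gatherUnitLogs(unit string, n int) (string, error) {
		cmd := fmt.Sprintf("sudo journalctl -u %s -n %d", unit, n)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "containerd"} {
			logs, err := gatherUnitLogs(unit, 400)
			if err != nil {
				fmt.Printf("failed to gather %s logs: %v\n", unit, err)
				continue
			}
			fmt.Println(logs)
		}
	}
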
	I1218 01:52:12.853298 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:12.863919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:12.864003 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:12.888289 1550381 cri.go:89] found id: ""
	I1218 01:52:12.888315 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.888324 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:12.888330 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:12.888389 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:12.914281 1550381 cri.go:89] found id: ""
	I1218 01:52:12.914306 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.914315 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:12.914321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:12.914384 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:12.941058 1550381 cri.go:89] found id: ""
	I1218 01:52:12.941083 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.941092 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:12.941098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:12.941160 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:12.966998 1550381 cri.go:89] found id: ""
	I1218 01:52:12.967022 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.967030 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:12.967037 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:12.967095 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:12.996005 1550381 cri.go:89] found id: ""
	I1218 01:52:12.996027 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.996036 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:12.996042 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:12.996099 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:13.023321 1550381 cri.go:89] found id: ""
	I1218 01:52:13.023345 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.023354 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:13.023360 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:13.023429 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:13.049195 1550381 cri.go:89] found id: ""
	I1218 01:52:13.049220 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.049229 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:13.049235 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:13.049295 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:13.074787 1550381 cri.go:89] found id: ""
	I1218 01:52:13.074816 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.074825 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:13.074835 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:13.074874 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:13.131893 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:13.131926 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:13.159867 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:13.159942 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:13.281047 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:13.272958   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.273401   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.274898   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.275221   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.276713   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:13.281070 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:13.281089 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:13.307183 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:13.307217 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:15.837707 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:15.848404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:15.848478 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:15.873587 1550381 cri.go:89] found id: ""
	I1218 01:52:15.873615 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.873624 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:15.873630 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:15.873689 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:15.897757 1550381 cri.go:89] found id: ""
	I1218 01:52:15.897780 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.897788 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:15.897795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:15.897852 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:15.923098 1550381 cri.go:89] found id: ""
	I1218 01:52:15.923123 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.923132 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:15.923138 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:15.923231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:15.952891 1550381 cri.go:89] found id: ""
	I1218 01:52:15.952921 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.952929 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:15.952935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:15.952991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:15.979178 1550381 cri.go:89] found id: ""
	I1218 01:52:15.979204 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.979212 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:15.979218 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:15.979276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:16.007995 1550381 cri.go:89] found id: ""
	I1218 01:52:16.008022 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.008031 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:16.008038 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:16.008101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:16.032581 1550381 cri.go:89] found id: ""
	I1218 01:52:16.032607 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.032616 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:16.032641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:16.032709 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:16.058847 1550381 cri.go:89] found id: ""
	I1218 01:52:16.058872 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.058881 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:16.058891 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:16.058902 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:16.116382 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:16.116416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:16.131483 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:16.131513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:16.233031 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:16.217680   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.218790   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223136   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223474   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.228595   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:16.233053 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:16.233066 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:16.262932 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:16.262966 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:18.790616 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:18.801658 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:18.801729 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:18.830076 1550381 cri.go:89] found id: ""
	I1218 01:52:18.830102 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.830112 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:18.830118 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:18.830179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:18.855278 1550381 cri.go:89] found id: ""
	I1218 01:52:18.855306 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.855315 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:18.855321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:18.855380 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:18.886976 1550381 cri.go:89] found id: ""
	I1218 01:52:18.886998 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.887012 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:18.887018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:18.887078 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:18.911656 1550381 cri.go:89] found id: ""
	I1218 01:52:18.911678 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.911686 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:18.911692 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:18.911750 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:18.935981 1550381 cri.go:89] found id: ""
	I1218 01:52:18.936002 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.936011 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:18.936017 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:18.936074 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:18.960773 1550381 cri.go:89] found id: ""
	I1218 01:52:18.960795 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.960804 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:18.960811 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:18.960871 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:18.985996 1550381 cri.go:89] found id: ""
	I1218 01:52:18.986023 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.986032 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:18.986039 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:18.986101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:19.011618 1550381 cri.go:89] found id: ""
	I1218 01:52:19.011696 1550381 logs.go:282] 0 containers: []
	W1218 01:52:19.011719 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:19.011740 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:19.011766 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:19.027064 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:19.027093 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:19.094483 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:19.086145   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.086880   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088561   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088988   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.090612   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:19.094507 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:19.094519 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:19.120053 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:19.120087 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:19.190394 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:19.190426 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:21.774413 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:21.785229 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:21.785300 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:21.814294 1550381 cri.go:89] found id: ""
	I1218 01:52:21.814316 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.814325 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:21.814331 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:21.814394 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:21.840168 1550381 cri.go:89] found id: ""
	I1218 01:52:21.840191 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.840200 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:21.840207 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:21.840267 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:21.865098 1550381 cri.go:89] found id: ""
	I1218 01:52:21.865120 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.865129 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:21.865134 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:21.865198 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:21.890513 1550381 cri.go:89] found id: ""
	I1218 01:52:21.890535 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.890543 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:21.890550 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:21.890607 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:21.915362 1550381 cri.go:89] found id: ""
	I1218 01:52:21.915384 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.915393 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:21.915399 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:21.915457 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:21.941078 1550381 cri.go:89] found id: ""
	I1218 01:52:21.941101 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.941110 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:21.941117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:21.941182 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:21.965276 1550381 cri.go:89] found id: ""
	I1218 01:52:21.965302 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.965311 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:21.965318 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:21.965375 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:21.990348 1550381 cri.go:89] found id: ""
	I1218 01:52:21.990370 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.990378 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:21.990387 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:21.990398 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:22.046097 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:22.046132 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:22.061468 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:22.061498 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:22.129867 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:22.121557   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.122235   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.123742   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.124182   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.125683   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:22.121557   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.122235   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.123742   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.124182   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.125683   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:22.129889 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:22.129901 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:22.160943 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:22.160982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:24.703063 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:24.713938 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:24.714009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:24.739085 1550381 cri.go:89] found id: ""
	I1218 01:52:24.739167 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.739189 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:24.739209 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:24.739298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:24.763316 1550381 cri.go:89] found id: ""
	I1218 01:52:24.763359 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.763368 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:24.763374 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:24.763443 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:24.789401 1550381 cri.go:89] found id: ""
	I1218 01:52:24.789431 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.789441 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:24.789471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:24.789558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:24.819426 1550381 cri.go:89] found id: ""
	I1218 01:52:24.819458 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.819468 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:24.819474 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:24.819547 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:24.844106 1550381 cri.go:89] found id: ""
	I1218 01:52:24.844143 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.844152 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:24.844159 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:24.844230 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:24.868116 1550381 cri.go:89] found id: ""
	I1218 01:52:24.868140 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.868149 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:24.868156 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:24.868213 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:24.892247 1550381 cri.go:89] found id: ""
	I1218 01:52:24.892280 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.892289 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:24.892311 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:24.892390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:24.917988 1550381 cri.go:89] found id: ""
	I1218 01:52:24.918013 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.918022 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:24.918031 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:24.918060 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:24.972539 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:24.972571 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:24.987364 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:24.987391 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:25.066535 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:25.057583   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.058416   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.060357   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.061023   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.062670   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:25.057583   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.058416   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.060357   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.061023   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.062670   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:25.066557 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:25.066572 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:25.093529 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:25.093573 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:27.627215 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:27.637795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:27.637864 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:27.661825 1550381 cri.go:89] found id: ""
	I1218 01:52:27.661850 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.661859 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:27.661866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:27.661931 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:27.688769 1550381 cri.go:89] found id: ""
	I1218 01:52:27.688795 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.688803 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:27.688810 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:27.688895 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:27.714909 1550381 cri.go:89] found id: ""
	I1218 01:52:27.714992 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.715009 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:27.715017 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:27.715080 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:27.742595 1550381 cri.go:89] found id: ""
	I1218 01:52:27.742620 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.742628 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:27.742636 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:27.742695 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:27.768328 1550381 cri.go:89] found id: ""
	I1218 01:52:27.768353 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.768361 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:27.768368 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:27.768444 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:27.794968 1550381 cri.go:89] found id: ""
	I1218 01:52:27.794993 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.795003 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:27.795010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:27.795094 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:27.821560 1550381 cri.go:89] found id: ""
	I1218 01:52:27.821587 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.821597 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:27.821603 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:27.821679 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:27.846888 1550381 cri.go:89] found id: ""
	I1218 01:52:27.846912 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.846921 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:27.846930 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:27.846942 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:27.861757 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:27.861785 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:27.926373 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:27.916602   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.917603   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919230   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919567   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.921199   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:27.916602   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.917603   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919230   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919567   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.921199   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:27.926400 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:27.926413 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:27.951763 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:27.951803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:27.984249 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:27.984278 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:30.543132 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:30.553809 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:30.553883 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:30.580729 1550381 cri.go:89] found id: ""
	I1218 01:52:30.580758 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.580767 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:30.580774 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:30.580837 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:30.611455 1550381 cri.go:89] found id: ""
	I1218 01:52:30.611479 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.611488 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:30.611494 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:30.611558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:30.637976 1550381 cri.go:89] found id: ""
	I1218 01:52:30.638002 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.638025 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:30.638049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:30.638134 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:30.663110 1550381 cri.go:89] found id: ""
	I1218 01:52:30.663135 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.663144 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:30.663150 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:30.663211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:30.689367 1550381 cri.go:89] found id: ""
	I1218 01:52:30.689391 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.689401 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:30.689416 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:30.689480 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:30.714721 1550381 cri.go:89] found id: ""
	I1218 01:52:30.714747 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.714756 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:30.714764 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:30.714826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:30.740391 1550381 cri.go:89] found id: ""
	I1218 01:52:30.740419 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.740428 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:30.740438 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:30.740502 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:30.769197 1550381 cri.go:89] found id: ""
	I1218 01:52:30.769264 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.769286 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:30.769306 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:30.769337 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:30.825762 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:30.825799 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:30.840467 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:30.840497 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:30.907063 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:30.898565   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.899378   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901153   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901681   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.903149   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:30.898565   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.899378   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901153   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901681   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.903149   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:30.907085 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:30.907098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:30.933175 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:30.933208 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:33.464940 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:33.477904 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:33.477982 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:33.502677 1550381 cri.go:89] found id: ""
	I1218 01:52:33.502703 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.502711 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:33.502718 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:33.502778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:33.528314 1550381 cri.go:89] found id: ""
	I1218 01:52:33.528341 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.528350 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:33.528356 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:33.528418 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:33.554186 1550381 cri.go:89] found id: ""
	I1218 01:52:33.554213 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.554221 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:33.554227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:33.554286 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:33.578717 1550381 cri.go:89] found id: ""
	I1218 01:52:33.578740 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.578751 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:33.578758 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:33.578819 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:33.603980 1550381 cri.go:89] found id: ""
	I1218 01:52:33.604054 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.604079 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:33.604098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:33.604287 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:33.629122 1550381 cri.go:89] found id: ""
	I1218 01:52:33.629149 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.629158 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:33.629165 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:33.629248 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:33.660229 1550381 cri.go:89] found id: ""
	I1218 01:52:33.660266 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.660281 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:33.660288 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:33.660356 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:33.685746 1550381 cri.go:89] found id: ""
	I1218 01:52:33.685812 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.685838 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:33.685854 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:33.685866 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:33.717052 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:33.717078 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:33.777106 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:33.777142 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:33.791689 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:33.791719 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:33.855601 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:33.847150   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.847890   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.849576   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.850251   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.851854   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:33.847150   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.847890   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.849576   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.850251   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.851854   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:33.855621 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:33.855633 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:36.380440 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:36.395133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:36.395206 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:36.463112 1550381 cri.go:89] found id: ""
	I1218 01:52:36.463145 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.463154 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:36.463162 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:36.463235 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:36.489631 1550381 cri.go:89] found id: ""
	I1218 01:52:36.489656 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.489665 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:36.489671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:36.489733 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:36.515149 1550381 cri.go:89] found id: ""
	I1218 01:52:36.515175 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.515186 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:36.515192 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:36.515253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:36.543702 1550381 cri.go:89] found id: ""
	I1218 01:52:36.543727 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.543736 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:36.543743 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:36.543802 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:36.568359 1550381 cri.go:89] found id: ""
	I1218 01:52:36.568384 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.568393 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:36.568400 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:36.568457 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:36.591933 1550381 cri.go:89] found id: ""
	I1218 01:52:36.591959 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.591968 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:36.591974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:36.592033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:36.619454 1550381 cri.go:89] found id: ""
	I1218 01:52:36.619479 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.619488 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:36.619494 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:36.619552 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:36.644231 1550381 cri.go:89] found id: ""
	I1218 01:52:36.644256 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.644265 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:36.644274 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:36.644286 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:36.673981 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:36.674008 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:36.730614 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:36.730648 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:36.745581 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:36.745614 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:36.808564 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:36.800393   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.801019   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.802683   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.803224   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.804801   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:36.800393   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.801019   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.802683   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.803224   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.804801   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:36.808591 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:36.808604 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:39.334388 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:39.345831 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:39.345904 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:39.374463 1550381 cri.go:89] found id: ""
	I1218 01:52:39.374486 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.374495 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:39.374501 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:39.374567 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:39.439153 1550381 cri.go:89] found id: ""
	I1218 01:52:39.439178 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.439187 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:39.439196 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:39.439255 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:39.483631 1550381 cri.go:89] found id: ""
	I1218 01:52:39.483655 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.483664 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:39.483670 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:39.483746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:39.513656 1550381 cri.go:89] found id: ""
	I1218 01:52:39.513681 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.513689 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:39.513695 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:39.513757 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:39.538364 1550381 cri.go:89] found id: ""
	I1218 01:52:39.538389 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.538397 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:39.538404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:39.538469 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:39.562963 1550381 cri.go:89] found id: ""
	I1218 01:52:39.562989 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.562997 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:39.563004 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:39.563063 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:39.590225 1550381 cri.go:89] found id: ""
	I1218 01:52:39.590247 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.590255 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:39.590261 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:39.590317 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:39.619590 1550381 cri.go:89] found id: ""
	I1218 01:52:39.619613 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.619622 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:39.619631 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:39.619642 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:39.645098 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:39.645133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:39.675338 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:39.675370 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:39.731953 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:39.731988 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:39.746929 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:39.746957 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:39.815336 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:39.807111   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.807747   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809330   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809940   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.811532   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:39.807111   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.807747   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809330   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809940   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.811532   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:42.315631 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:42.327549 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:42.327635 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:42.355093 1550381 cri.go:89] found id: ""
	I1218 01:52:42.355117 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.355126 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:42.355133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:42.355193 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:42.383724 1550381 cri.go:89] found id: ""
	I1218 01:52:42.383746 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.383755 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:42.383763 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:42.383822 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:42.439728 1550381 cri.go:89] found id: ""
	I1218 01:52:42.439752 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.439761 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:42.439767 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:42.439826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:42.485723 1550381 cri.go:89] found id: ""
	I1218 01:52:42.485751 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.485760 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:42.485766 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:42.485835 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:42.518003 1550381 cri.go:89] found id: ""
	I1218 01:52:42.518030 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.518040 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:42.518046 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:42.518105 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:42.542509 1550381 cri.go:89] found id: ""
	I1218 01:52:42.542534 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.542543 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:42.542550 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:42.542608 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:42.567103 1550381 cri.go:89] found id: ""
	I1218 01:52:42.567127 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.567135 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:42.567144 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:42.567210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:42.591556 1550381 cri.go:89] found id: ""
	I1218 01:52:42.591623 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.591648 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:42.591670 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:42.591708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:42.622840 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:42.622867 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:42.677917 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:42.677950 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:42.692666 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:42.692699 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:42.765474 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:42.757065   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.757907   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759378   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759855   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.761353   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:42.757065   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.757907   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759378   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759855   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.761353   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:42.765497 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:42.765509 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:45.291290 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:45.308807 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:45.308972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:45.342117 1550381 cri.go:89] found id: ""
	I1218 01:52:45.342151 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.342160 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:45.342168 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:45.342233 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:45.370490 1550381 cri.go:89] found id: ""
	I1218 01:52:45.370516 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.370525 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:45.370531 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:45.370612 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:45.416227 1550381 cri.go:89] found id: ""
	I1218 01:52:45.416262 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.416272 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:45.416278 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:45.416359 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:45.475986 1550381 cri.go:89] found id: ""
	I1218 01:52:45.476010 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.476019 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:45.476026 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:45.476089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:45.505307 1550381 cri.go:89] found id: ""
	I1218 01:52:45.505375 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.505400 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:45.505419 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:45.505520 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:45.531649 1550381 cri.go:89] found id: ""
	I1218 01:52:45.531676 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.531685 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:45.531691 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:45.531762 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:45.557231 1550381 cri.go:89] found id: ""
	I1218 01:52:45.557258 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.557268 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:45.557274 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:45.557332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:45.581819 1550381 cri.go:89] found id: ""
	I1218 01:52:45.581846 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.581855 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:45.581864 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:45.581876 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:45.637946 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:45.637982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:45.653092 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:45.653127 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:45.733673 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:45.725909   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.726495   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.727841   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.728307   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.729804   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:45.733695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:45.733708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:45.759208 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:45.759243 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:48.291278 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:48.302161 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:48.302234 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:48.326549 1550381 cri.go:89] found id: ""
	I1218 01:52:48.326572 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.326580 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:48.326587 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:48.326647 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:48.355829 1550381 cri.go:89] found id: ""
	I1218 01:52:48.355853 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.355863 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:48.355869 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:48.355927 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:48.384367 1550381 cri.go:89] found id: ""
	I1218 01:52:48.384404 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.384414 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:48.384421 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:48.384495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:48.440457 1550381 cri.go:89] found id: ""
	I1218 01:52:48.440486 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.440495 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:48.440502 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:48.440572 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:48.484538 1550381 cri.go:89] found id: ""
	I1218 01:52:48.484565 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.484574 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:48.484580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:48.484671 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:48.517629 1550381 cri.go:89] found id: ""
	I1218 01:52:48.517655 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.517664 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:48.517670 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:48.517727 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:48.544213 1550381 cri.go:89] found id: ""
	I1218 01:52:48.544250 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.544259 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:48.544268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:48.544338 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:48.571178 1550381 cri.go:89] found id: ""
	I1218 01:52:48.571214 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.571224 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:48.571233 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:48.571244 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:48.629108 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:48.629154 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:48.644078 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:48.644105 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:48.710322 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:48.701933   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.702491   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704137   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704712   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.706352   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:48.710345 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:48.710357 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:48.735873 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:48.735908 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:51.264224 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:51.274867 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:51.274936 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:51.302544 1550381 cri.go:89] found id: ""
	I1218 01:52:51.302574 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.302582 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:51.302591 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:51.302650 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:51.326887 1550381 cri.go:89] found id: ""
	I1218 01:52:51.326920 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.326929 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:51.326935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:51.326996 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:51.355805 1550381 cri.go:89] found id: ""
	I1218 01:52:51.355833 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.355842 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:51.355849 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:51.355910 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:51.385402 1550381 cri.go:89] found id: ""
	I1218 01:52:51.385475 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.385502 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:51.385516 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:51.385597 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:51.429600 1550381 cri.go:89] found id: ""
	I1218 01:52:51.429679 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.429705 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:51.429723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:51.429795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:51.482295 1550381 cri.go:89] found id: ""
	I1218 01:52:51.482362 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.482386 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:51.482406 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:51.482483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:51.509210 1550381 cri.go:89] found id: ""
	I1218 01:52:51.509282 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.509307 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:51.509319 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:51.509392 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:51.534258 1550381 cri.go:89] found id: ""
	I1218 01:52:51.534335 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.534359 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:51.534374 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:51.534399 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:51.590233 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:51.590266 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:51.604772 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:51.604807 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:51.669210 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:51.660468   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.661850   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.662312   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.663995   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.664345   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:51.669233 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:51.669245 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:51.694168 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:51.694201 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:54.225084 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:54.235834 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:54.235909 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:54.263169 1550381 cri.go:89] found id: ""
	I1218 01:52:54.263202 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.263212 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:54.263219 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:54.263286 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:54.288775 1550381 cri.go:89] found id: ""
	I1218 01:52:54.288801 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.288812 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:54.288818 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:54.288881 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:54.313424 1550381 cri.go:89] found id: ""
	I1218 01:52:54.313455 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.313463 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:54.313470 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:54.313545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:54.337557 1550381 cri.go:89] found id: ""
	I1218 01:52:54.337586 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.337595 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:54.337604 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:54.337660 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:54.362944 1550381 cri.go:89] found id: ""
	I1218 01:52:54.362968 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.362976 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:54.362983 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:54.363055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:54.405526 1550381 cri.go:89] found id: ""
	I1218 01:52:54.405546 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.405554 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:54.405560 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:54.405617 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:54.470952 1550381 cri.go:89] found id: ""
	I1218 01:52:54.470975 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.470983 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:54.470995 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:54.471051 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:54.499299 1550381 cri.go:89] found id: ""
	I1218 01:52:54.499324 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.499332 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:54.499341 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:54.499352 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:54.554755 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:54.554791 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:54.569411 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:54.569439 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:54.630717 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:54.622173   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.622694   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.623736   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625233   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625729   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:54.630737 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:54.630751 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:54.656160 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:54.656197 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:57.184460 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:57.195292 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:57.195360 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:57.220784 1550381 cri.go:89] found id: ""
	I1218 01:52:57.220821 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.220831 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:57.220837 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:57.220911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:57.245470 1550381 cri.go:89] found id: ""
	I1218 01:52:57.245493 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.245501 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:57.245508 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:57.245572 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:57.271053 1550381 cri.go:89] found id: ""
	I1218 01:52:57.271076 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.271084 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:57.271091 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:57.271149 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:57.297094 1550381 cri.go:89] found id: ""
	I1218 01:52:57.297117 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.297125 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:57.297132 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:57.297189 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:57.321869 1550381 cri.go:89] found id: ""
	I1218 01:52:57.321903 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.321913 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:57.321919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:57.321980 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:57.346700 1550381 cri.go:89] found id: ""
	I1218 01:52:57.346726 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.346736 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:57.346743 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:57.346804 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:57.371462 1550381 cri.go:89] found id: ""
	I1218 01:52:57.371487 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.371496 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:57.371503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:57.371561 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:57.408706 1550381 cri.go:89] found id: ""
	I1218 01:52:57.408725 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.408733 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:57.408742 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:57.408754 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:57.518131 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:57.510001   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.510418   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512044   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512702   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.514351   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:57.518152 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:57.518165 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:57.544836 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:57.544872 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:57.572743 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:57.572782 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:57.635526 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:57.635567 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:00.150459 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:00.169757 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:00.169839 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:00.240442 1550381 cri.go:89] found id: ""
	I1218 01:53:00.240472 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.240482 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:00.240489 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:00.240568 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:00.297137 1550381 cri.go:89] found id: ""
	I1218 01:53:00.297224 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.297243 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:00.297253 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:00.297363 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:00.336217 1550381 cri.go:89] found id: ""
	I1218 01:53:00.336242 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.336251 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:00.336259 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:00.336333 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:00.365991 1550381 cri.go:89] found id: ""
	I1218 01:53:00.366020 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.366030 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:00.366037 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:00.366107 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:00.425076 1550381 cri.go:89] found id: ""
	I1218 01:53:00.425152 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.425177 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:00.425198 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:00.425310 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:00.464180 1550381 cri.go:89] found id: ""
	I1218 01:53:00.464259 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.464291 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:00.464313 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:00.464419 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:00.498012 1550381 cri.go:89] found id: ""
	I1218 01:53:00.498088 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.498112 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:00.498133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:00.498248 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:00.526153 1550381 cri.go:89] found id: ""
	I1218 01:53:00.526228 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.526250 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:00.526271 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:00.526313 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:00.581384 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:00.581418 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:00.596391 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:00.596467 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:00.665518 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:00.656710   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.657369   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659279   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659812   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.661528   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:00.665541 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:00.665554 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:00.691014 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:00.691052 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:03.221071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:03.232071 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:03.232143 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:03.256975 1550381 cri.go:89] found id: ""
	I1218 01:53:03.256998 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.257006 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:03.257012 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:03.257070 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:03.286981 1550381 cri.go:89] found id: ""
	I1218 01:53:03.287006 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.287021 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:03.287028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:03.287089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:03.315833 1550381 cri.go:89] found id: ""
	I1218 01:53:03.315858 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.315867 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:03.315873 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:03.315935 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:03.343588 1550381 cri.go:89] found id: ""
	I1218 01:53:03.343611 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.343619 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:03.343626 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:03.343684 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:03.369440 1550381 cri.go:89] found id: ""
	I1218 01:53:03.369469 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.369478 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:03.369485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:03.369545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:03.428115 1550381 cri.go:89] found id: ""
	I1218 01:53:03.428138 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.428147 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:03.428154 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:03.428211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:03.484823 1550381 cri.go:89] found id: ""
	I1218 01:53:03.484847 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.484856 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:03.484862 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:03.484920 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:03.512094 1550381 cri.go:89] found id: ""
	I1218 01:53:03.512119 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.512128 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:03.512139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:03.512150 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:03.568376 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:03.568411 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:03.583603 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:03.583632 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:03.651107 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:03.641448   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.642529   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.644209   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.645062   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.646724   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:03.651129 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:03.651143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:03.676088 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:03.676125 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:06.206266 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:06.217464 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:06.217558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:06.242745 1550381 cri.go:89] found id: ""
	I1218 01:53:06.242770 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.242779 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:06.242786 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:06.242846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:06.267735 1550381 cri.go:89] found id: ""
	I1218 01:53:06.267757 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.267765 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:06.267771 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:06.267834 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:06.297274 1550381 cri.go:89] found id: ""
	I1218 01:53:06.297297 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.297306 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:06.297313 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:06.297372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:06.326794 1550381 cri.go:89] found id: ""
	I1218 01:53:06.326820 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.326829 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:06.326835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:06.326893 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:06.351519 1550381 cri.go:89] found id: ""
	I1218 01:53:06.351543 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.351552 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:06.351558 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:06.351617 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:06.378499 1550381 cri.go:89] found id: ""
	I1218 01:53:06.378525 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.378534 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:06.378540 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:06.378598 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:06.414203 1550381 cri.go:89] found id: ""
	I1218 01:53:06.414236 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.414246 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:06.414252 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:06.414316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:06.493089 1550381 cri.go:89] found id: ""
	I1218 01:53:06.493116 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.493125 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:06.493134 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:06.493147 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:06.522114 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:06.522145 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:06.578855 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:06.578891 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:06.594005 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:06.594033 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:06.658779 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:06.650476   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.651243   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.652788   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.653284   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.654784   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:06.658800 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:06.658814 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:09.183921 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:09.194857 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:09.194928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:09.218740 1550381 cri.go:89] found id: ""
	I1218 01:53:09.218764 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.218772 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:09.218778 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:09.218835 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:09.243853 1550381 cri.go:89] found id: ""
	I1218 01:53:09.243879 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.243888 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:09.243894 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:09.243954 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:09.269591 1550381 cri.go:89] found id: ""
	I1218 01:53:09.269615 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.269624 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:09.269630 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:09.269691 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:09.299082 1550381 cri.go:89] found id: ""
	I1218 01:53:09.299120 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.299129 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:09.299136 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:09.299207 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:09.324088 1550381 cri.go:89] found id: ""
	I1218 01:53:09.324121 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.324131 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:09.324137 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:09.324203 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:09.348898 1550381 cri.go:89] found id: ""
	I1218 01:53:09.348921 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.348930 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:09.348936 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:09.348997 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:09.374245 1550381 cri.go:89] found id: ""
	I1218 01:53:09.374268 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.374279 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:09.374286 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:09.374346 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:09.413630 1550381 cri.go:89] found id: ""
	I1218 01:53:09.413653 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.413662 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:09.413672 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:09.413689 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:09.474660 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:09.474685 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:09.541382 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:09.534111   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.534512   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.535991   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.536308   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.537731   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:09.541403 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:09.541416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:09.566761 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:09.566792 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:09.593984 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:09.594011 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:12.149658 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:12.160130 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:12.160258 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:12.185266 1550381 cri.go:89] found id: ""
	I1218 01:53:12.185339 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.185356 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:12.185363 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:12.185434 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:12.212092 1550381 cri.go:89] found id: ""
	I1218 01:53:12.212124 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.212133 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:12.212139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:12.212205 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:12.235977 1550381 cri.go:89] found id: ""
	I1218 01:53:12.236009 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.236018 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:12.236024 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:12.236091 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:12.260037 1550381 cri.go:89] found id: ""
	I1218 01:53:12.260069 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.260079 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:12.260085 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:12.260151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:12.285034 1550381 cri.go:89] found id: ""
	I1218 01:53:12.285060 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.285069 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:12.285075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:12.285142 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:12.309185 1550381 cri.go:89] found id: ""
	I1218 01:53:12.309221 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.309231 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:12.309256 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:12.309330 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:12.333588 1550381 cri.go:89] found id: ""
	I1218 01:53:12.333613 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.333622 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:12.333629 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:12.333697 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:12.362204 1550381 cri.go:89] found id: ""
	I1218 01:53:12.362228 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.362237 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:12.362246 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:12.362292 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:12.427192 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:12.431443 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:12.465023 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:12.465048 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:12.534431 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:12.526324   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.526831   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528540   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528945   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.530443   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:12.534453 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:12.534465 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:12.560311 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:12.560349 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:15.088443 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:15.100075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:15.100170 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:15.126386 1550381 cri.go:89] found id: ""
	I1218 01:53:15.126410 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.126419 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:15.126425 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:15.126493 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:15.152426 1550381 cri.go:89] found id: ""
	I1218 01:53:15.152450 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.152459 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:15.152466 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:15.152529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:15.178155 1550381 cri.go:89] found id: ""
	I1218 01:53:15.178184 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.178193 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:15.178199 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:15.178263 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:15.203664 1550381 cri.go:89] found id: ""
	I1218 01:53:15.203687 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.203696 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:15.203703 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:15.203767 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:15.228792 1550381 cri.go:89] found id: ""
	I1218 01:53:15.228815 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.228823 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:15.228830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:15.228891 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:15.257550 1550381 cri.go:89] found id: ""
	I1218 01:53:15.257575 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.257585 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:15.257594 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:15.257656 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:15.283324 1550381 cri.go:89] found id: ""
	I1218 01:53:15.283350 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.283359 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:15.283365 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:15.283430 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:15.311422 1550381 cri.go:89] found id: ""
	I1218 01:53:15.311455 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.311465 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:15.311474 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:15.311486 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:15.367419 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:15.367456 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:15.382340 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:15.382370 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:15.500526 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:15.489021   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.489409   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.492963   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.494907   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.495528   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:15.500551 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:15.500563 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:15.527154 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:15.527190 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:18.057588 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:18.068726 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:18.068799 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:18.096722 1550381 cri.go:89] found id: ""
	I1218 01:53:18.096859 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.096895 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:18.096919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:18.097001 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:18.121827 1550381 cri.go:89] found id: ""
	I1218 01:53:18.121851 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.121860 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:18.121866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:18.121932 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:18.146993 1550381 cri.go:89] found id: ""
	I1218 01:53:18.147018 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.147028 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:18.147034 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:18.147094 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:18.171236 1550381 cri.go:89] found id: ""
	I1218 01:53:18.171258 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.171266 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:18.171272 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:18.171333 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:18.199330 1550381 cri.go:89] found id: ""
	I1218 01:53:18.199355 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.199367 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:18.199373 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:18.199432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:18.225625 1550381 cri.go:89] found id: ""
	I1218 01:53:18.225649 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.225659 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:18.225666 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:18.225746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:18.250702 1550381 cri.go:89] found id: ""
	I1218 01:53:18.250725 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.250734 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:18.250741 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:18.250854 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:18.276500 1550381 cri.go:89] found id: ""
	I1218 01:53:18.276525 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.276534 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:18.276543 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:18.276559 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:18.333753 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:18.333788 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:18.350466 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:18.350520 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:18.431435 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:18.419698   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.420102   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421430   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421827   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.424073   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:18.431467 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:18.431480 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:18.463849 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:18.463889 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:21.008824 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:21.019970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:21.020040 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:21.044583 1550381 cri.go:89] found id: ""
	I1218 01:53:21.044607 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.044616 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:21.044641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:21.044701 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:21.069261 1550381 cri.go:89] found id: ""
	I1218 01:53:21.069286 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.069295 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:21.069301 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:21.069360 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:21.099196 1550381 cri.go:89] found id: ""
	I1218 01:53:21.099219 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.099228 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:21.099234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:21.099298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:21.124519 1550381 cri.go:89] found id: ""
	I1218 01:53:21.124541 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.124550 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:21.124556 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:21.124707 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:21.153447 1550381 cri.go:89] found id: ""
	I1218 01:53:21.153474 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.153483 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:21.153503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:21.153561 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:21.178670 1550381 cri.go:89] found id: ""
	I1218 01:53:21.178694 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.178702 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:21.178709 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:21.178770 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:21.207919 1550381 cri.go:89] found id: ""
	I1218 01:53:21.207944 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.207953 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:21.207959 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:21.208017 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:21.232478 1550381 cri.go:89] found id: ""
	I1218 01:53:21.232503 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.232512 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:21.232521 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:21.232533 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:21.287757 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:21.287789 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:21.302312 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:21.302349 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:21.366377 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:21.358420   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.358816   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360484   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360966   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.362554   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:21.366399 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:21.366411 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:21.393029 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:21.393110 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:23.948667 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:23.959340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:23.959436 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:23.986999 1550381 cri.go:89] found id: ""
	I1218 01:53:23.987024 1550381 logs.go:282] 0 containers: []
	W1218 01:53:23.987033 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:23.987040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:23.987103 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:24.020720 1550381 cri.go:89] found id: ""
	I1218 01:53:24.020799 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.020833 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:24.020846 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:24.020920 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:24.047235 1550381 cri.go:89] found id: ""
	I1218 01:53:24.047267 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.047283 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:24.047299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:24.047373 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:24.080575 1550381 cri.go:89] found id: ""
	I1218 01:53:24.080599 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.080608 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:24.080615 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:24.080706 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:24.105557 1550381 cri.go:89] found id: ""
	I1218 01:53:24.105585 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.105595 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:24.105601 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:24.105661 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:24.130738 1550381 cri.go:89] found id: ""
	I1218 01:53:24.130764 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.130773 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:24.130779 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:24.130839 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:24.159061 1550381 cri.go:89] found id: ""
	I1218 01:53:24.159088 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.159097 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:24.159104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:24.159166 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:24.187647 1550381 cri.go:89] found id: ""
	I1218 01:53:24.187674 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.187684 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:24.187694 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:24.187704 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:24.242513 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:24.242544 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:24.257316 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:24.257396 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:24.320000 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:24.312222   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.312775   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314002   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314568   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.316156   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:24.320020 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:24.320037 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:24.346099 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:24.346136 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:26.873531 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:26.885238 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:26.885314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:26.910216 1550381 cri.go:89] found id: ""
	I1218 01:53:26.910239 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.910247 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:26.910253 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:26.910313 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:26.933448 1550381 cri.go:89] found id: ""
	I1218 01:53:26.933475 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.933484 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:26.933490 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:26.933553 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:26.957855 1550381 cri.go:89] found id: ""
	I1218 01:53:26.957888 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.957897 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:26.957904 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:26.957979 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:26.982293 1550381 cri.go:89] found id: ""
	I1218 01:53:26.982357 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.982373 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:26.982380 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:26.982445 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:27.008361 1550381 cri.go:89] found id: ""
	I1218 01:53:27.008398 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.008408 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:27.008415 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:27.008475 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:27.037587 1550381 cri.go:89] found id: ""
	I1218 01:53:27.037613 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.037622 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:27.037628 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:27.037686 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:27.065312 1550381 cri.go:89] found id: ""
	I1218 01:53:27.065376 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.065401 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:27.065423 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:27.065510 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:27.090401 1550381 cri.go:89] found id: ""
	I1218 01:53:27.090427 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.090435 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:27.090445 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:27.090457 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:27.105745 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:27.105773 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:27.166883 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:27.158743   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.159249   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.160767   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.161180   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.162645   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:27.166902 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:27.166917 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:27.192695 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:27.192732 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:27.224139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:27.224167 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:29.783401 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:29.794627 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:29.794738 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:29.819835 1550381 cri.go:89] found id: ""
	I1218 01:53:29.819862 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.819872 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:29.819879 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:29.819939 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:29.844881 1550381 cri.go:89] found id: ""
	I1218 01:53:29.844910 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.844919 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:29.844925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:29.844986 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:29.869995 1550381 cri.go:89] found id: ""
	I1218 01:53:29.870023 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.870032 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:29.870038 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:29.870100 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:29.895647 1550381 cri.go:89] found id: ""
	I1218 01:53:29.895671 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.895681 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:29.895687 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:29.895746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:29.922749 1550381 cri.go:89] found id: ""
	I1218 01:53:29.922773 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.922782 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:29.922788 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:29.922847 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:29.948026 1550381 cri.go:89] found id: ""
	I1218 01:53:29.948052 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.948061 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:29.948071 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:29.948129 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:29.974575 1550381 cri.go:89] found id: ""
	I1218 01:53:29.974598 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.974607 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:29.974614 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:29.974673 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:30.004723 1550381 cri.go:89] found id: ""
	I1218 01:53:30.004807 1550381 logs.go:282] 0 containers: []
	W1218 01:53:30.004831 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:30.004861 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:30.004908 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:30.103939 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:30.103976 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:30.120775 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:30.120815 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:30.191673 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:30.183408   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.184288   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.185882   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.186462   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.187470   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:30.183408   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.184288   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.185882   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.186462   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.187470   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:30.191695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:30.191707 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:30.218142 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:30.218175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:32.750923 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:32.764019 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:32.764089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:32.789861 1550381 cri.go:89] found id: ""
	I1218 01:53:32.789885 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.789894 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:32.789900 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:32.789967 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:32.821480 1550381 cri.go:89] found id: ""
	I1218 01:53:32.821513 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.821525 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:32.821532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:32.821601 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:32.847702 1550381 cri.go:89] found id: ""
	I1218 01:53:32.847733 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.847744 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:32.847751 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:32.847811 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:32.872820 1550381 cri.go:89] found id: ""
	I1218 01:53:32.872845 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.872855 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:32.872861 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:32.872976 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:32.901902 1550381 cri.go:89] found id: ""
	I1218 01:53:32.901975 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.902012 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:32.902020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:32.902100 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:32.926991 1550381 cri.go:89] found id: ""
	I1218 01:53:32.927016 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.927024 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:32.927031 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:32.927093 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:32.951930 1550381 cri.go:89] found id: ""
	I1218 01:53:32.951957 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.951966 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:32.951972 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:32.952034 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:32.977838 1550381 cri.go:89] found id: ""
	I1218 01:53:32.977864 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.977874 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:32.977883 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:32.977894 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:33.047486 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:33.037555   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.038730   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.039716   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041395   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041745   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:33.037555   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.038730   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.039716   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041395   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041745   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:33.047516 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:33.047530 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:33.074046 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:33.074084 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:33.106481 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:33.106509 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:33.164051 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:33.164095 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:35.679393 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:35.706090 1550381 out.go:203] 
	W1218 01:53:35.709129 1550381 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1218 01:53:35.709179 1550381 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1218 01:53:35.709189 1550381 out.go:285] * Related issues:
	* Related issues:
	W1218 01:53:35.709204 1550381 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1218 01:53:35.709225 1550381 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1218 01:53:35.712031 1550381 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 105
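The K8S_APISERVER_MISSING exit above means the start timed out waiting for a kube-apiserver process that never appeared; the harness's own checks (pgrep, crictl, journalctl) all came back empty. The same triage can be replayed by hand while the container is still up. A minimal sketch, assuming the profile name from the failing args and that `minikube ssh -- <cmd>` pass-through behaves as in current minikube releases:

	# is an apiserver process running inside the node? (the log's own check)
	minikube ssh -p newest-cni-120615 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# any control-plane containers at all, falling back to docker if crictl is missing
	minikube ssh -p newest-cni-120615 -- 'sudo crictl ps -a || sudo docker ps -a'

	# kubelet's view of why the static pods never started
	minikube ssh -p newest-cni-120615 -- sudo journalctl -u kubelet -n 100

	# the suggestion also asks about SELinux on the host; getenforce only exists where SELinux tooling is installed
	getenforce 2>/dev/null || echo "no SELinux tooling"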
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-120615
helpers_test.go:244: (dbg) docker inspect newest-cni-120615:

-- stdout --
	[
	    {
	        "Id": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	        "Created": "2025-12-18T01:37:46.267734033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1550552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:47:25.795117457Z",
	            "FinishedAt": "2025-12-18T01:47:24.299442993Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hosts",
	        "LogPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1-json.log",
	        "Name": "/newest-cni-120615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-120615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-120615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	                "LowerDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-120615",
	                "Source": "/var/lib/docker/volumes/newest-cni-120615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-120615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-120615",
	                "name.minikube.sigs.k8s.io": "newest-cni-120615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "03d6121fa7465afe54c6849e5d9912cbd0edd591438a044dd295828487da20b2",
	            "SandboxKey": "/var/run/docker/netns/03d6121fa746",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34220"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-120615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:76:51:cf:bd:72",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3561ba231e6c48a625724c6039bb103aabf4482d7db78bad659da0b08d445469",
	                    "EndpointID": "94d026911af52030bc96754a63e0334f51dcbb249930773e615cdc9fb74f4e43",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-120615",
	                        "dd9cd12a762d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
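The full docker inspect dump above can be narrowed to the fields the post-mortem actually uses with `-f` Go templates; the harness itself applies the same technique later in this log (e.g. `--format={{.State.Status}}`). A short sketch, assuming the container name from this run:

	# container state and restart count
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' newest-cni-120615

	# host port published for the apiserver's 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-120615

	# node IP on the profile's dedicated network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-120615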
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (367.696274ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-120615 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-120615 logs -n 25: (1.578807834s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-922343 image list --format=json                                                                                                                                                                                                              │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ pause   │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ unpause │ -p embed-certs-922343 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:39 UTC │                     │
	│ stop    │ -p no-preload-970975 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ addons  │ enable dashboard -p no-preload-970975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ start   │ -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-120615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:46 UTC │                     │
	│ stop    │ -p newest-cni-120615 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │ 18 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-120615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │ 18 Dec 25 01:47 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:47:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:47:25.355718 1550381 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:47:25.355915 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.355941 1550381 out.go:374] Setting ErrFile to fd 2...
	I1218 01:47:25.355960 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.356345 1550381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:47:25.356861 1550381 out.go:368] Setting JSON to false
	I1218 01:47:25.358213 1550381 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30592,"bootTime":1765991854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:47:25.358285 1550381 start.go:143] virtualization:  
	I1218 01:47:25.361184 1550381 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:47:25.364947 1550381 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:47:25.365006 1550381 notify.go:221] Checking for updates...
	I1218 01:47:25.370797 1550381 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:47:25.373705 1550381 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:25.376399 1550381 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:47:25.379145 1550381 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:47:25.381925 1550381 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1218 01:47:23.895415 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:47:25.400717 1542458 node_ready.go:38] duration metric: took 6m0.00576723s for node "no-preload-970975" to be "Ready" ...
	I1218 01:47:25.403890 1542458 out.go:203] 
	W1218 01:47:25.406708 1542458 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 01:47:25.406730 1542458 out.go:285] * 
	W1218 01:47:25.413144 1542458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:47:25.416224 1542458 out.go:203] 
	I1218 01:47:25.385246 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:25.385825 1550381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:47:25.416975 1550381 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:47:25.417132 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.547941 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.531353346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.548100 1550381 docker.go:319] overlay module found
	I1218 01:47:25.551414 1550381 out.go:179] * Using the docker driver based on existing profile
	I1218 01:47:25.554261 1550381 start.go:309] selected driver: docker
	I1218 01:47:25.554288 1550381 start.go:927] validating driver "docker" against &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.554406 1550381 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:47:25.555118 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.640875 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.630200713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.641222 1550381 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:47:25.641258 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:25.641307 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:25.641353 1550381 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
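The cluster config echoed above is also persisted as JSON in the profile directory (see the "Saving config to ..." lines nearby), so the wait set and Kubernetes version can be pulled without rerunning minikube. A sketch, assuming the JSON keys mirror the Go field names printed in the log:

	# path sits under MINIKUBE_HOME; on this CI host that is the jenkins workspace
	CFG=$HOME/.minikube/profiles/newest-cni-120615/config.json
	jq '{name: .Name, version: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime, wait: .VerifyComponents}' "$CFG"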
	I1218 01:47:25.647668 1550381 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:47:25.650778 1550381 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:47:25.654776 1550381 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:47:25.657861 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:25.657921 1550381 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:47:25.657930 1550381 cache.go:65] Caching tarball of preloaded images
	I1218 01:47:25.658010 1550381 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:47:25.658022 1550381 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:47:25.658128 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:25.658345 1550381 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:47:25.717764 1550381 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:47:25.717789 1550381 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:47:25.717804 1550381 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:47:25.717832 1550381 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:47:25.717885 1550381 start.go:364] duration metric: took 36.159µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:47:25.717905 1550381 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:47:25.717910 1550381 fix.go:54] fixHost starting: 
	I1218 01:47:25.718174 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:25.745308 1550381 fix.go:112] recreateIfNeeded on newest-cni-120615: state=Stopped err=<nil>
	W1218 01:47:25.745341 1550381 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:47:25.748580 1550381 out.go:252] * Restarting existing docker container for "newest-cni-120615" ...
	I1218 01:47:25.748689 1550381 cli_runner.go:164] Run: docker start newest-cni-120615
	I1218 01:47:26.093744 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:26.142570 1550381 kic.go:430] container "newest-cni-120615" state is running.
	I1218 01:47:26.143025 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:26.185359 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:26.185574 1550381 machine.go:94] provisionDockerMachine start ...
	I1218 01:47:26.185645 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:26.213286 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:26.213626 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:26.213647 1550381 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:47:26.214251 1550381 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51806->127.0.0.1:34217: read: connection reset by peer
	I1218 01:47:29.372266 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:47:29.372355 1550381 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:47:29.372452 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:29.391771 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:29.392072 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:29.392083 1550381 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:47:29.561538 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:47:29.561625 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:29.579579 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:29.579890 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:29.579907 1550381 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
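The fragment above is idempotent: it rewrites an existing 127.0.1.1 entry if one is present, otherwise appends one, so repeated provisioning leaves a single mapping. A quick post-check on the node (hostname taken from this run):

    # Expect one 127.0.1.1 line for the node, and local resolution via /etc/hosts.
    grep '^127\.0\.1\.1[[:blank:]]' /etc/hosts
    getent hosts newest-cni-120615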
	I1218 01:47:29.737159 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:47:29.737184 1550381 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:47:29.737219 1550381 ubuntu.go:190] setting up certificates
	I1218 01:47:29.737230 1550381 provision.go:84] configureAuth start
	I1218 01:47:29.737295 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:29.756140 1550381 provision.go:143] copyHostCerts
	I1218 01:47:29.756217 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:47:29.756227 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:47:29.756310 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:47:29.756403 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:47:29.756408 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:47:29.756436 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:47:29.756487 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:47:29.756491 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:47:29.756514 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:47:29.756559 1550381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
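minikube generates that server certificate in Go; a hypothetical openssl equivalent of the same step, assuming the ca.pem/ca-key.pem files named in the log and the SAN list shown above (bash, for the process substitution):

    # Sketch only: issue a server cert signed by the minikube CA with the logged SANs.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.newest-cni-120615"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-120615')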
	I1218 01:47:30.464419 1550381 provision.go:177] copyRemoteCerts
	I1218 01:47:30.464487 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:47:30.464527 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.482395 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.589769 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:47:30.608046 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:47:30.627105 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 01:47:30.645433 1550381 provision.go:87] duration metric: took 908.179647ms to configureAuth
	I1218 01:47:30.645503 1550381 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:47:30.645738 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:30.645753 1550381 machine.go:97] duration metric: took 4.460171667s to provisionDockerMachine
	I1218 01:47:30.645761 1550381 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:47:30.645773 1550381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:47:30.645828 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:47:30.645876 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.663527 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.774279 1550381 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:47:30.777807 1550381 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:47:30.777838 1550381 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:47:30.777851 1550381 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:47:30.777919 1550381 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:47:30.778044 1550381 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:47:30.778177 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:47:30.786077 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:47:30.804331 1550381 start.go:296] duration metric: took 158.553882ms for postStartSetup
	I1218 01:47:30.804411 1550381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:47:30.804450 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.822410 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.925924 1550381 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:47:30.931214 1550381 fix.go:56] duration metric: took 5.213296131s for fixHost
	I1218 01:47:30.931236 1550381 start.go:83] releasing machines lock for "newest-cni-120615", held for 5.213342998s
	I1218 01:47:30.931301 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:30.952534 1550381 ssh_runner.go:195] Run: cat /version.json
	I1218 01:47:30.952560 1550381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:47:30.952584 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.952698 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.969636 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.973480 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
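The repeated docker container inspect template above is how the published host port for the container's sshd is discovered; the same lookup works from a shell, and the key path is the one each "new ssh client" line reports:

    # Find the host port mapped to 22/tcp, then SSH in as the 'docker' user.
    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-120615)
    ssh -o StrictHostKeyChecking=no -p "$PORT" \
      -i /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa \
      docker@127.0.0.1 hostname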
	I1218 01:47:31.167774 1550381 ssh_runner.go:195] Run: systemctl --version
	I1218 01:47:31.174874 1550381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:47:31.179507 1550381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:47:31.179587 1550381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:47:31.187709 1550381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 01:47:31.187739 1550381 start.go:496] detecting cgroup driver to use...
	I1218 01:47:31.187790 1550381 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:47:31.187842 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:47:31.205437 1550381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:47:31.218917 1550381 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:47:31.218989 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:47:31.234859 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:47:31.247863 1550381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:47:31.361666 1550381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:47:31.478401 1550381 docker.go:234] disabling docker service ...
	I1218 01:47:31.478516 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:47:31.493181 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:47:31.506484 1550381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:47:31.622932 1550381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:47:31.755398 1550381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
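The ordering above matters: the .socket units are stopped before the .service units so socket activation cannot resurrect the daemons, and the services are then masked. A condensed sketch of the same sequence:

    # Stop socket units first, then disable and mask; '|| true' keeps it
    # idempotent on images where a unit is absent.
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker && echo "docker still active" >&2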
	I1218 01:47:31.768148 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:47:31.786320 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:47:31.795518 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:47:31.804506 1550381 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:47:31.804591 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:47:31.814205 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:47:31.823037 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:47:31.832187 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:47:31.841421 1550381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:47:31.849663 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:47:31.858543 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:47:31.867324 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:47:31.878120 1550381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:47:31.886565 1550381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:47:31.894226 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:32.000205 1550381 ssh_runner.go:195] Run: sudo systemctl restart containerd
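The sed edits above rewrite /etc/containerd/config.toml in place (cgroupfs as the cgroup driver, pause:3.10.1 as the sandbox image, unprivileged ports enabled, runc v2 runtime), and the daemon-reload plus restart pick them up. A quick post-check that the settings took effect and CRI is serving again:

    # Confirm the edited keys and that containerd answers on its CRI socket.
    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
    sudo systemctl is-active containerd
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info >/dev/null && echo "CRI up"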
	I1218 01:47:32.119373 1550381 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:47:32.119494 1550381 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:47:32.123705 1550381 start.go:564] Will wait 60s for crictl version
	I1218 01:47:32.123796 1550381 ssh_runner.go:195] Run: which crictl
	I1218 01:47:32.127736 1550381 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:47:32.151646 1550381 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:47:32.151742 1550381 ssh_runner.go:195] Run: containerd --version
	I1218 01:47:32.171630 1550381 ssh_runner.go:195] Run: containerd --version
	I1218 01:47:32.197786 1550381 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:47:32.200756 1550381 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:47:32.216905 1550381 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:47:32.220989 1550381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:47:32.234255 1550381 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:47:32.237186 1550381 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:47:32.237352 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:32.237431 1550381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:47:32.266567 1550381 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:47:32.266592 1550381 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:47:32.266653 1550381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:47:32.290056 1550381 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:47:32.290080 1550381 cache_images.go:86] Images are preloaded, skipping loading
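The "all images are preloaded" decision comes from listing CRI images and comparing against the expected set for v1.35.0-rc.1. A sketch of the same inspection (jq assumed available on the host):

    # List the image tags containerd already knows about; a populated list is
    # what lets the preload tarball extraction be skipped.
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort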
	I1218 01:47:32.290087 1550381 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:47:32.290202 1550381 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
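The kubelet flags above are materialized as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below). Done by hand it would look like this, with the ExecStart copied from the log; the [Unit]/[Install] halves live in the main kubelet.service that is written alongside it:

    # Install the kubelet drop-in and pick it up without a reboot.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet

The empty ExecStart= line is deliberate: it clears the list inherited from the main unit before the replacement command is set.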
	I1218 01:47:32.290272 1550381 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:47:32.317281 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:32.317305 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:32.317328 1550381 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:47:32.317382 1550381 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:47:32.317534 1550381 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 01:47:32.317611 1550381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:47:32.325240 1550381 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:47:32.325360 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:47:32.332953 1550381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:47:32.345753 1550381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:47:32.358201 1550381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
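The rendered kubeadm config above is shipped to /var/tmp/minikube/kubeadm.yaml.new; on this restart path it is only diffed against the existing file rather than re-applied. It can be sanity-checked independently with kubeadm's own validator (available in recent kubeadm releases):

    # Validate the rendered config against the kubeadm API types.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new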
	I1218 01:47:32.371135 1550381 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:47:32.374910 1550381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:47:32.385004 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:32.524322 1550381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:47:32.543517 1550381 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:47:32.543581 1550381 certs.go:195] generating shared ca certs ...
	I1218 01:47:32.543620 1550381 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:32.543768 1550381 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:47:32.543847 1550381 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:47:32.543878 1550381 certs.go:257] generating profile certs ...
	I1218 01:47:32.544012 1550381 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:47:32.544110 1550381 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:47:32.544194 1550381 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:47:32.544363 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:47:32.544429 1550381 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:47:32.544454 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:47:32.544506 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:47:32.544561 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:47:32.544639 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:47:32.544713 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:47:32.545379 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:47:32.570494 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:47:32.589292 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:47:32.607511 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:47:32.630085 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:47:32.648120 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:47:32.665293 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:47:32.683115 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:47:32.701108 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:47:32.719384 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:47:32.737332 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:47:32.755228 1550381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:47:32.768547 1550381 ssh_runner.go:195] Run: openssl version
	I1218 01:47:32.775214 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.783201 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:47:32.791100 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.794909 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.794975 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.836868 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:47:32.844649 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.852089 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:47:32.859827 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.863774 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.863845 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.904999 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:47:32.912518 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.919928 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:47:32.927254 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.930966 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.931034 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.972378 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
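The openssl -hash / ln -fs / test -L triples above implement the standard OpenSSL trust-directory layout: each CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (b5213941.0 is minikubeCA's hash in this run). The same linkage for one cert:

    # Link a CA into the OpenSSL trust directory under its subject-hash name.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "trusted"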
	I1218 01:47:32.979895 1550381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:47:32.983509 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:47:33.024763 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:47:33.066928 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:47:33.108240 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:47:33.150820 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:47:33.193721 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
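Each -checkend 86400 call above asks openssl whether the certificate remains valid for the next 24 hours (exit status 0 if so). Looping over the same control-plane certs gives the whole health check in one pass:

    # Fail loudly on any control-plane cert expiring within 24h.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" || echo "EXPIRING: ${c}" >&2
    done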
	I1218 01:47:33.236344 1550381 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:33.236435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:47:33.236534 1550381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:47:33.262713 1550381 cri.go:89] found id: ""
	I1218 01:47:33.262784 1550381 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:47:33.270865 1550381 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:47:33.270885 1550381 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:47:33.270962 1550381 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:47:33.278569 1550381 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:47:33.279133 1550381 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-120615" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:33.279389 1550381 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-120615" cluster setting kubeconfig missing "newest-cni-120615" context setting]
	I1218 01:47:33.279869 1550381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
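The repair above adds the missing cluster and context entries to the kubeconfig. A rough equivalent of the verify-endpoint check that triggered it, looking the server URL up by cluster name (empty output means the entry is missing):

    # Print the API server URL recorded for this profile, if any.
    kubectl --kubeconfig /home/jenkins/minikube-integration/22186-1259289/kubeconfig \
      config view -o jsonpath='{.clusters[?(@.name=="newest-cni-120615")].cluster.server}'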
	I1218 01:47:33.281782 1550381 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:47:33.289414 1550381 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1218 01:47:33.289446 1550381 kubeadm.go:602] duration metric: took 18.555667ms to restartPrimaryControlPlane
	I1218 01:47:33.289461 1550381 kubeadm.go:403] duration metric: took 53.123465ms to StartCluster
	I1218 01:47:33.289476 1550381 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.289537 1550381 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:33.290381 1550381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.290591 1550381 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:47:33.290894 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:33.290942 1550381 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 01:47:33.291049 1550381 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-120615"
	I1218 01:47:33.291069 1550381 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-120615"
	I1218 01:47:33.291087 1550381 addons.go:70] Setting dashboard=true in profile "newest-cni-120615"
	I1218 01:47:33.291142 1550381 addons.go:239] Setting addon dashboard=true in "newest-cni-120615"
	W1218 01:47:33.291166 1550381 addons.go:248] addon dashboard should already be in state true
	I1218 01:47:33.291217 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.291092 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.291788 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.291956 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.291099 1550381 addons.go:70] Setting default-storageclass=true in profile "newest-cni-120615"
	I1218 01:47:33.292357 1550381 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-120615"
	I1218 01:47:33.292683 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.296441 1550381 out.go:179] * Verifying Kubernetes components...
	I1218 01:47:33.299325 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:33.332793 1550381 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:47:33.338698 1550381 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:33.338720 1550381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:47:33.338786 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.346302 1550381 addons.go:239] Setting addon default-storageclass=true in "newest-cni-120615"
	I1218 01:47:33.346350 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.346767 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.347220 1550381 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1218 01:47:33.357584 1550381 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1218 01:47:33.364736 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1218 01:47:33.364766 1550381 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1218 01:47:33.364841 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.384388 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.388779 1550381 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:33.388806 1550381 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:47:33.388870 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.420777 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.424445 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.506937 1550381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:47:33.590614 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:33.623167 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:33.644036 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1218 01:47:33.644058 1550381 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1218 01:47:33.686194 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1218 01:47:33.686219 1550381 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1218 01:47:33.699257 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1218 01:47:33.699284 1550381 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1218 01:47:33.712575 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1218 01:47:33.712598 1550381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1218 01:47:33.726008 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1218 01:47:33.726036 1550381 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1218 01:47:33.739578 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1218 01:47:33.739601 1550381 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1218 01:47:33.752283 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1218 01:47:33.752306 1550381 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1218 01:47:33.765197 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1218 01:47:33.765228 1550381 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1218 01:47:33.778397 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:33.778463 1550381 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1218 01:47:33.791499 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:34.144394 1550381 api_server.go:52] waiting for apiserver process to appear ...
	I1218 01:47:34.144937 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:34.144564 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145084 1550381 retry.go:31] will retry after 226.399987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.144607 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145242 1550381 retry.go:31] will retry after 194.583533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.144818 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145308 1550381 retry.go:31] will retry after 316.325527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.341084 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:34.371646 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:34.416769 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.416804 1550381 retry.go:31] will retry after 482.49716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.445473 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.445504 1550381 retry.go:31] will retry after 401.349435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.462702 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:34.529683 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.529767 1550381 retry.go:31] will retry after 466.9672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.645712 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:34.847135 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:34.899725 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:34.915787 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.915821 1550381 retry.go:31] will retry after 680.448009ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.980399 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.980428 1550381 retry.go:31] will retry after 371.155762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.997728 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:35.075146 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.075188 1550381 retry.go:31] will retry after 528.393444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.145511 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:35.352321 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:35.422768 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.422808 1550381 retry.go:31] will retry after 703.678182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.597254 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:35.604769 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:35.645316 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:35.700025 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.700065 1550381 retry.go:31] will retry after 524.167729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:35.720166 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.720199 1550381 retry.go:31] will retry after 843.445988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.127505 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:36.145942 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:36.218437 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.218469 1550381 retry.go:31] will retry after 1.4365249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.224772 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:36.288029 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.288065 1550381 retry.go:31] will retry after 1.092662167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.564433 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:36.628283 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.628318 1550381 retry.go:31] will retry after 821.063441ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.645614 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.145021 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.381704 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:37.442129 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.442163 1550381 retry.go:31] will retry after 1.066797005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.450315 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:37.513152 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.513188 1550381 retry.go:31] will retry after 2.094232702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.645565 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.656033 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:37.728287 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.728341 1550381 retry.go:31] will retry after 2.192570718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.145856 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:38.509851 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:38.574127 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.574163 1550381 retry.go:31] will retry after 2.056176901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.645562 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:39.145843 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:39.608414 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:39.645902 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:39.677401 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.677446 1550381 retry.go:31] will retry after 2.219986296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.921684 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:39.986039 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.986071 1550381 retry.go:31] will retry after 1.874712757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:40.145336 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:40.630985 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:40.645468 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:40.721503 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:40.721589 1550381 retry.go:31] will retry after 5.659633915s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.145050 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:41.645071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:41.861275 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:41.897736 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:41.919445 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.919480 1550381 retry.go:31] will retry after 5.257989291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:41.968013 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.968047 1550381 retry.go:31] will retry after 2.407225539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:42.145507 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:42.645709 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:43.145827 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:43.645206 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:44.145140 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
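
Interleaved with the apply retries, minikube polls roughly every 500ms for a running apiserver process via "sudo pgrep -xnf kube-apiserver.*minikube.*"; pgrep exits non-zero until a process whose full command line matches the pattern exists. A minimal sketch of that poll loop, assuming a local shell rather than the SSH session the real ssh_runner uses, with an illustrative 30s deadline:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // Poll for the apiserver process the way the ssh_runner lines do:
    // pgrep -f matches the full command line, -x requires an exact regex
    // match, and the loop ticks every 500ms until a PID is reported.
    func main() {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	deadline := time.Now().Add(30 * time.Second)
    	for range ticker.C {
    		if time.Now().After(deadline) {
    			fmt.Println("gave up waiting for kube-apiserver")
    			return
    		}
    		out, err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Output()
    		if err == nil { // pgrep exits 0 once a matching process exists
    			fmt.Printf("kube-apiserver running, pid(s): %s", out)
    			return
    		}
    	}
    }
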
	I1218 01:47:44.375521 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:44.445301 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:44.445333 1550381 retry.go:31] will retry after 6.049252935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:44.645790 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:45.145091 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:45.646076 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:46.145377 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:46.381920 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:46.446240 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:46.446272 1550381 retry.go:31] will retry after 6.470588043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:46.645629 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:47.145934 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:47.178013 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:47.241089 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:47.241122 1550381 retry.go:31] will retry after 8.808880621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:47.645680 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:48.145730 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:48.646057 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:49.145645 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:49.646010 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:50.145037 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:50.495265 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:50.557628 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:50.557662 1550381 retry.go:31] will retry after 5.398438748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:50.645968 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:51.145305 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:51.645106 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.145818 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.645593 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.917095 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:53.016010 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:53.016044 1550381 retry.go:31] will retry after 7.672661981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:53.145281 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:53.645853 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:54.145129 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:54.645151 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.145097 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.645490 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.957008 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:56.023826 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.023863 1550381 retry.go:31] will retry after 8.13600998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.050917 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:56.116243 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.116276 1550381 retry.go:31] will retry after 5.600895051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.145475 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:56.645854 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:57.145640 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:57.645927 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:58.145109 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:58.645621 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:59.145858 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:59.645893 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.145118 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.645093 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.689724 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:00.750450 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:00.750485 1550381 retry.go:31] will retry after 19.327903144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:01.145862 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:01.645460 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:01.717566 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:48:01.782999 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:01.783030 1550381 retry.go:31] will retry after 18.603092159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
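
By this point the storageclass retry delays have grown from about 2s to over 18s, so with the apiserver persistently down, backoff sleeps start to dominate the test's wall-clock budget. Summing the delays actually printed for the storageclass applies so far (values copied from the retry.go lines above) makes that concrete:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Total sleep accumulated by the storageclass retries in this log.
    // The literals are nanosecond values taken from the retry.go lines
    // (2.192570718s, 1.874712757s, 5.257989291s, 8.808880621s,
    // 5.600895051s, 18.603092159s).
    func main() {
    	delays := []time.Duration{
    		2192570718, 1874712757, 5257989291,
    		8808880621, 5600895051, 18603092159,
    	}
    	var total time.Duration
    	for _, d := range delays {
    		total += d
    	}
    	fmt.Println("total sleep for storageclass retries so far:", total)
    }
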
	I1218 01:48:02.145671 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:02.645087 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:03.145743 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:03.645040 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:04.145864 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:04.161047 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:48:04.272335 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:04.272373 1550381 retry.go:31] will retry after 12.170847168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:04.645651 … 01:48:16.145895 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* (24 attempts at 0.5s intervals, no match)
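
This pgrep polling is minikube's apiserver health wait: pgrep -xnf (-x exact match, -n newest, -f match against the full command line) runs every 500ms until a kube-apiserver process whose command line mentions minikube reappears. A hypothetical Go sketch of that poll loop; the two-minute deadline is an assumption, not a value taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
		for now := range tick.C {
			if now.After(deadline) {
				fmt.Println("gave up waiting for kube-apiserver")
				return
			}
			// Exit status 0 means a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
		}
	}
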
	I1218 01:48:16.444141 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:48:16.505161 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:16.505200 1550381 retry.go:31] will retry after 25.656674631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:16.645612 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:17.145123 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:17.645762 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:18.145134 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:18.645126 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:19.145081 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:19.645152 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:20.079482 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:20.141746 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.141779 1550381 retry.go:31] will retry after 22.047786735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.145903 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:20.387205 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:48:20.452144 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.452188 1550381 retry.go:31] will retry after 24.810473247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.645470 … 01:48:33.146067 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* (26 attempts at 0.5s intervals, no match)
	I1218 01:48:33.645142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:33.645253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:33.669719 1550381 cri.go:89] found id: ""
	I1218 01:48:33.669745 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.669754 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:33.669760 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:33.669817 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:33.695127 1550381 cri.go:89] found id: ""
	I1218 01:48:33.695150 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.695159 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:33.695164 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:33.695253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:33.719637 1550381 cri.go:89] found id: ""
	I1218 01:48:33.719659 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.719668 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:33.719674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:33.719778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:33.746705 1550381 cri.go:89] found id: ""
	I1218 01:48:33.746731 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.746740 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:33.746746 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:33.746805 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:33.774595 1550381 cri.go:89] found id: ""
	I1218 01:48:33.774620 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.774631 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:33.774638 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:33.774696 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:33.802090 1550381 cri.go:89] found id: ""
	I1218 01:48:33.802115 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.802123 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:33.802130 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:33.802187 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:33.827047 1550381 cri.go:89] found id: ""
	I1218 01:48:33.827084 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.827094 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:33.827100 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:33.827172 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:33.855186 1550381 cri.go:89] found id: ""
	I1218 01:48:33.855213 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.855222 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
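
Once the process wait fails, minikube probes the CRI for each control-plane component in turn; an empty ID list from crictl ps confirms the containers were never (re)created, not merely unhealthy. A sketch of that probe loop, with the component list taken from the log and error handling trimmed:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if ids := strings.Fields(string(out)); len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			}
		}
	}
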
	I1218 01:48:33.855230 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:33.855241 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:33.910490 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:33.910527 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:33.925321 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:33.925361 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:33.990602 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:33.982192    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.983024    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.984725    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.985196    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.986663    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:33.982192    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.983024    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.984725    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.985196    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.986663    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:33.990624 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:33.990636 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:34.016861 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:34.016901 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
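
With no component containers to read logs from, the diagnostics fall back to node-level sources: the kubelet and containerd journals, filtered dmesg, a kubectl describe nodes (which fails here too, with client-go's discovery retries producing the five memcache.go errors), and a container status listing whose `which crictl || echo crictl` guard falls back to docker ps when crictl is missing. A sketch of running that sequence, with the command strings copied from the log and the bash -c wrapper matching how minikube invokes them:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo journalctl -u containerd -n 400",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
		}
	}
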
	I1218 01:48:36.546620 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:36.557304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:36.557390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:36.582868 1550381 cri.go:89] found id: ""
	I1218 01:48:36.582891 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.582900 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:36.582906 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:36.582964 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:36.608045 1550381 cri.go:89] found id: ""
	I1218 01:48:36.608067 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.608075 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:36.608081 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:36.608137 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:36.633385 1550381 cri.go:89] found id: ""
	I1218 01:48:36.633408 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.633417 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:36.633423 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:36.633482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:36.657140 1550381 cri.go:89] found id: ""
	I1218 01:48:36.657165 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.657175 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:36.657187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:36.657254 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:36.686651 1550381 cri.go:89] found id: ""
	I1218 01:48:36.686673 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.686683 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:36.686689 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:36.686753 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:36.712049 1550381 cri.go:89] found id: ""
	I1218 01:48:36.712073 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.712082 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:36.712089 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:36.712146 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:36.736327 1550381 cri.go:89] found id: ""
	I1218 01:48:36.736355 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.736369 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:36.736375 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:36.736432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:36.763059 1550381 cri.go:89] found id: ""
	I1218 01:48:36.763085 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.763094 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:36.763104 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:36.763115 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:36.818060 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:36.818095 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:36.833161 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:36.833198 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:36.900981 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:36.892245    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.892727    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894448    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894886    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.896574    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:36.892245    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.892727    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894448    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894886    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.896574    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:36.901005 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:36.901018 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:36.926395 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:36.926435 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:39.461526 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:39.472938 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:39.473011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:39.499282 1550381 cri.go:89] found id: ""
	I1218 01:48:39.499309 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.499317 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:39.499324 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:39.499387 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:39.524947 1550381 cri.go:89] found id: ""
	I1218 01:48:39.524983 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.524992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:39.524998 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:39.525108 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:39.549919 1550381 cri.go:89] found id: ""
	I1218 01:48:39.549944 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.549953 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:39.549959 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:39.550021 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:39.574351 1550381 cri.go:89] found id: ""
	I1218 01:48:39.574376 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.574391 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:39.574398 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:39.574456 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:39.598033 1550381 cri.go:89] found id: ""
	I1218 01:48:39.598054 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.598063 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:39.598069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:39.598133 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:39.626910 1550381 cri.go:89] found id: ""
	I1218 01:48:39.626932 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.626940 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:39.626946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:39.627002 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:39.655231 1550381 cri.go:89] found id: ""
	I1218 01:48:39.655302 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.655326 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:39.655346 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:39.655426 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:39.684000 1550381 cri.go:89] found id: ""
	I1218 01:48:39.684079 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.684106 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:39.684129 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:39.684170 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:39.739075 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:39.739109 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:39.753861 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:39.753890 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:39.817313 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:39.809803    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.810430    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.811897    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.812206    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.813639    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:39.809803    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.810430    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.811897    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.812206    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.813639    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:39.817335 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:39.817347 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:39.842685 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:39.842727 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:42.162239 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:48:42.190324 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:42.249384 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:48:42.249527 1550381 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
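
After its retry budget is exhausted, the dashboard addon gives up and the failure is surfaced to the user as a single warning (the out.go:285 line above), while storage-provisioner and storageclass keep retrying on their own timers. A hedged sketch of that shape follows; the callback map and its contents are illustrative, inferred only from the "running callbacks" wording in the message.

	package main

	import (
		"errors"
		"fmt"
	)

	func main() {
		// Hypothetical per-addon enable callbacks.
		callbacks := map[string]func() error{
			"dashboard": func() error { return errors.New("apply failed: connection refused") },
		}
		for name, enable := range callbacks {
			if err := enable(); err != nil {
				// Warn instead of aborting start, matching the "!" line in the log.
				fmt.Printf("! Enabling '%s' returned an error: running callbacks: [%v]\n", name, err)
			}
		}
	}
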
	W1218 01:48:42.279196 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:42.279234 1550381 retry.go:31] will retry after 35.148907823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:42.371473 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:42.382637 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:42.382711 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:42.428461 1550381 cri.go:89] found id: ""
	I1218 01:48:42.428490 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.428499 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:42.428505 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:42.428565 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:42.464484 1550381 cri.go:89] found id: ""
	I1218 01:48:42.464511 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.464520 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:42.464526 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:42.464600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:42.501574 1550381 cri.go:89] found id: ""
	I1218 01:48:42.501644 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.501668 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:42.501682 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:42.501756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:42.529255 1550381 cri.go:89] found id: ""
	I1218 01:48:42.529283 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.529292 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:42.529299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:42.529357 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:42.563020 1550381 cri.go:89] found id: ""
	I1218 01:48:42.563093 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.563130 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:42.563153 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:42.563240 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:42.589599 1550381 cri.go:89] found id: ""
	I1218 01:48:42.589672 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.589689 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:42.589697 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:42.589756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:42.620478 1550381 cri.go:89] found id: ""
	I1218 01:48:42.620500 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.620509 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:42.620515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:42.620600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:42.647535 1550381 cri.go:89] found id: ""
	I1218 01:48:42.647560 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.647574 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:42.647583 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:42.647594 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:42.705328 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:42.705366 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:42.720602 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:42.720653 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:42.791434 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:42.782564    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.783462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785016    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.787043    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:42.782564    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.783462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785016    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.787043    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
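The five memcache.go errors per attempt are kubectl's API discovery: before "describe nodes" can run, the client must fetch the server's API group list from https://localhost:8443/api, and each discovery probe is refused, so the command never gets past discovery. A quick, illustrative way to confirm the endpoint itself is dead (curl is an assumption here, not something the test harness runs):

	curl -sk https://localhost:8443/api \
	  || echo "connection refused: nothing is listening on apiserver port 8443"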
	I1218 01:48:42.791460 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:42.791474 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:42.816821 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:42.816855 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:45.263722 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:48:45.345805 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:48:45.349241 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:45.349279 1550381 retry.go:31] will retry after 26.611542555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
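The retry.go delays seen above (35.148907823s for storage-provisioner, 26.611542555s for storageclass) are minikube's randomized backoff; each failed apply is simply rescheduled with a fresh random wait. A rough shell equivalent of that loop, for illustration only (the real logic lives in Go in retry.go; the paths match the commands above, and the 10-40s delay band is an assumption read off the two delays in this log):

	for attempt in 1 2 3; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/storageclass.yaml && break
	  sleep "$((RANDOM % 30 + 10))"  # randomized wait, roughly the band seen in the log
	done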
	I1218 01:48:45.357893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:45.358009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:45.383950 1550381 cri.go:89] found id: ""
	I1218 01:48:45.383977 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.383986 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:45.383993 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:45.384055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:45.429969 1550381 cri.go:89] found id: ""
	I1218 01:48:45.429995 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.430004 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:45.430010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:45.430071 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:45.493689 1550381 cri.go:89] found id: ""
	I1218 01:48:45.493720 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.493730 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:45.493736 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:45.493830 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:45.520332 1550381 cri.go:89] found id: ""
	I1218 01:48:45.520355 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.520363 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:45.520369 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:45.520425 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:45.547181 1550381 cri.go:89] found id: ""
	I1218 01:48:45.547245 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.547270 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:45.547289 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:45.547366 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:45.572686 1550381 cri.go:89] found id: ""
	I1218 01:48:45.572754 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.572780 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:45.572804 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:45.572879 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:45.596710 1550381 cri.go:89] found id: ""
	I1218 01:48:45.596734 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.596743 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:45.596749 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:45.596809 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:45.622285 1550381 cri.go:89] found id: ""
	I1218 01:48:45.622316 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.622325 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:45.622335 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:45.622345 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:45.680819 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:45.680854 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:45.695825 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:45.695856 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:45.758598 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:45.749462    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.750187    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.751923    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.752474    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.754167    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:45.749462    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.750187    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.751923    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.752474    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.754167    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:45.758621 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:45.758634 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:45.783476 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:45.783513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
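Each of these cycles is the same roughly three-second diagnostic sweep: probe for a kube-apiserver process, list each expected control-plane container through crictl (every query returns no IDs), then gather, in varying order, the kubelet and containerd journals, dmesg, and a describe-nodes attempt that is again refused. The sweep can be reproduced by hand; every command below is taken verbatim from the runner's own invocations above (only the shell quoting around the pgrep pattern and the for-loop are added):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400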
	I1218 01:48:48.311112 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:48.321845 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:48.321917 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:48.347239 1550381 cri.go:89] found id: ""
	I1218 01:48:48.347260 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.347269 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:48.347276 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:48.347352 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:48.372522 1550381 cri.go:89] found id: ""
	I1218 01:48:48.372548 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.372557 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:48.372564 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:48.372641 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:48.419361 1550381 cri.go:89] found id: ""
	I1218 01:48:48.419385 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.419402 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:48.419409 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:48.419476 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:48.468755 1550381 cri.go:89] found id: ""
	I1218 01:48:48.468780 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.468789 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:48.468795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:48.468865 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:48.499951 1550381 cri.go:89] found id: ""
	I1218 01:48:48.499978 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.499987 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:48.499993 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:48.500066 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:48.525758 1550381 cri.go:89] found id: ""
	I1218 01:48:48.525784 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.525793 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:48.525799 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:48.525867 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:48.554959 1550381 cri.go:89] found id: ""
	I1218 01:48:48.554982 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.554991 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:48.554999 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:48.555073 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:48.579603 1550381 cri.go:89] found id: ""
	I1218 01:48:48.579627 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.579636 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:48.579646 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:48.579682 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:48.638239 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:48.638284 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:48.652698 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:48.652747 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:48.719758 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:48.711855    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.712379    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.713878    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.714310    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.715829    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:48.711855    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.712379    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.713878    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.714310    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.715829    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:48.719781 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:48.719796 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:48.744911 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:48.744946 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:51.273570 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:51.283902 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:51.283973 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:51.308033 1550381 cri.go:89] found id: ""
	I1218 01:48:51.308057 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.308065 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:51.308072 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:51.308135 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:51.335581 1550381 cri.go:89] found id: ""
	I1218 01:48:51.335604 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.335612 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:51.335618 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:51.335676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:51.364109 1550381 cri.go:89] found id: ""
	I1218 01:48:51.364135 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.364144 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:51.364150 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:51.364208 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:51.401663 1550381 cri.go:89] found id: ""
	I1218 01:48:51.401689 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.401698 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:51.401704 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:51.401764 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:51.436653 1550381 cri.go:89] found id: ""
	I1218 01:48:51.436679 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.436688 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:51.436696 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:51.436755 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:51.484873 1550381 cri.go:89] found id: ""
	I1218 01:48:51.484900 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.484908 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:51.484915 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:51.484972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:51.512364 1550381 cri.go:89] found id: ""
	I1218 01:48:51.512389 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.512398 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:51.512404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:51.512463 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:51.536334 1550381 cri.go:89] found id: ""
	I1218 01:48:51.536359 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.536368 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:51.536378 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:51.536389 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:51.590814 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:51.590847 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:51.605410 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:51.605438 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:51.679184 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:51.670350    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.671165    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.673030    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.673634    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.675286    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:51.670350    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.671165    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.673030    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.673634    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.675286    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:51.679247 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:51.679267 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:51.704862 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:51.704898 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:54.232571 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:54.243250 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:54.243318 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:54.268694 1550381 cri.go:89] found id: ""
	I1218 01:48:54.268762 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.268776 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:54.268783 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:54.268861 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:54.294766 1550381 cri.go:89] found id: ""
	I1218 01:48:54.294789 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.294798 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:54.294811 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:54.294872 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:54.319370 1550381 cri.go:89] found id: ""
	I1218 01:48:54.319396 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.319405 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:54.319411 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:54.319470 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:54.344762 1550381 cri.go:89] found id: ""
	I1218 01:48:54.344805 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.344815 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:54.344839 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:54.344928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:54.376778 1550381 cri.go:89] found id: ""
	I1218 01:48:54.376805 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.376823 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:54.376830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:54.376948 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:54.435510 1550381 cri.go:89] found id: ""
	I1218 01:48:54.435589 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.435620 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:54.435641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:54.435763 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:54.481350 1550381 cri.go:89] found id: ""
	I1218 01:48:54.481428 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.481456 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:54.481476 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:54.481621 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:54.520301 1550381 cri.go:89] found id: ""
	I1218 01:48:54.520377 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.520399 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:54.520420 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:54.520457 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:54.578993 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:54.579045 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:54.595845 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:54.595876 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:54.661543 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:54.653204    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.654003    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.655599    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.656056    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.657576    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:54.653204    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.654003    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.655599    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.656056    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.657576    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:54.661566 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:54.661578 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:54.687751 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:54.687803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:57.222271 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:57.232723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:57.232795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:57.260837 1550381 cri.go:89] found id: ""
	I1218 01:48:57.260858 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.260866 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:57.260872 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:57.260928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:57.286122 1550381 cri.go:89] found id: ""
	I1218 01:48:57.286148 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.286156 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:57.286163 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:57.286220 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:57.310908 1550381 cri.go:89] found id: ""
	I1218 01:48:57.310930 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.310939 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:57.310945 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:57.311005 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:57.336552 1550381 cri.go:89] found id: ""
	I1218 01:48:57.336573 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.336583 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:57.336589 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:57.336681 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:57.363069 1550381 cri.go:89] found id: ""
	I1218 01:48:57.363098 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.363106 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:57.363113 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:57.363175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:57.387453 1550381 cri.go:89] found id: ""
	I1218 01:48:57.387483 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.387492 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:57.387499 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:57.387556 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:57.455540 1550381 cri.go:89] found id: ""
	I1218 01:48:57.455567 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.455576 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:57.455583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:57.455641 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:57.487729 1550381 cri.go:89] found id: ""
	I1218 01:48:57.487751 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.487759 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:57.487773 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:57.487783 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:57.513517 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:57.513555 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:57.541522 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:57.541591 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:57.599250 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:57.599285 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:57.614575 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:57.614612 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:57.685065 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:57.672222    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.672963    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.677651    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.678785    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.679420    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:57.672222    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.672963    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.677651    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.678785    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.679420    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:00.185435 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:00.217821 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:00.217993 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:00.272675 1550381 cri.go:89] found id: ""
	I1218 01:49:00.272752 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.272781 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:00.272803 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:00.272911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:00.308098 1550381 cri.go:89] found id: ""
	I1218 01:49:00.308130 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.308140 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:00.308148 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:00.308229 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:00.342048 1550381 cri.go:89] found id: ""
	I1218 01:49:00.342083 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.342093 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:00.342102 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:00.342176 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:00.373793 1550381 cri.go:89] found id: ""
	I1218 01:49:00.373867 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.373893 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:00.373912 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:00.374032 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:00.453457 1550381 cri.go:89] found id: ""
	I1218 01:49:00.453540 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.453562 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:00.453580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:00.453674 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:00.497069 1550381 cri.go:89] found id: ""
	I1218 01:49:00.497139 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.497165 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:00.497229 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:00.497320 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:00.523805 1550381 cri.go:89] found id: ""
	I1218 01:49:00.523883 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.523907 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:00.523925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:00.523998 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:00.550245 1550381 cri.go:89] found id: ""
	I1218 01:49:00.550315 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.550338 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:00.550356 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:00.550368 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:00.606138 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:00.606171 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:00.621471 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:00.621501 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:00.687608 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:00.679362    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.680138    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.681738    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.682079    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.683574    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:00.679362    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.680138    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.681738    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.682079    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.683574    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:00.687630 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:00.687645 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:00.713254 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:00.713288 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:03.251500 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:03.263863 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:03.263937 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:03.292341 1550381 cri.go:89] found id: ""
	I1218 01:49:03.292363 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.292372 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:03.292379 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:03.292444 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:03.318593 1550381 cri.go:89] found id: ""
	I1218 01:49:03.318618 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.318627 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:03.318633 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:03.318713 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:03.342954 1550381 cri.go:89] found id: ""
	I1218 01:49:03.342976 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.342984 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:03.342990 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:03.343056 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:03.369216 1550381 cri.go:89] found id: ""
	I1218 01:49:03.369240 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.369255 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:03.369262 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:03.369321 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:03.418160 1550381 cri.go:89] found id: ""
	I1218 01:49:03.418196 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.418208 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:03.418234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:03.418314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:03.468056 1550381 cri.go:89] found id: ""
	I1218 01:49:03.468090 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.468100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:03.468107 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:03.468177 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:03.493930 1550381 cri.go:89] found id: ""
	I1218 01:49:03.493954 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.493964 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:03.493970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:03.494028 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:03.522766 1550381 cri.go:89] found id: ""
	I1218 01:49:03.522799 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.522808 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:03.522817 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:03.522845 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:03.579881 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:03.579922 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:03.595497 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:03.595533 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:03.664750 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:03.656141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.656975    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.658591    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.659141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.660853    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:03.656141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.656975    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.658591    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.659141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.660853    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:03.664774 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:03.664789 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:03.690066 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:03.690102 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:06.220404 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:06.230940 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:06.231013 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:06.258449 1550381 cri.go:89] found id: ""
	I1218 01:49:06.258493 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.258501 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:06.258511 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:06.258570 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:06.284944 1550381 cri.go:89] found id: ""
	I1218 01:49:06.284967 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.284975 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:06.284981 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:06.285038 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:06.310888 1550381 cri.go:89] found id: ""
	I1218 01:49:06.310914 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.310923 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:06.310929 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:06.310992 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:06.336281 1550381 cri.go:89] found id: ""
	I1218 01:49:06.336306 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.336316 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:06.336322 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:06.336384 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:06.361424 1550381 cri.go:89] found id: ""
	I1218 01:49:06.361489 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.361507 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:06.361515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:06.361581 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:06.386353 1550381 cri.go:89] found id: ""
	I1218 01:49:06.386381 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.386390 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:06.386396 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:06.386458 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:06.420497 1550381 cri.go:89] found id: ""
	I1218 01:49:06.420523 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.420533 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:06.420540 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:06.420599 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:06.477983 1550381 cri.go:89] found id: ""
	I1218 01:49:06.478008 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.478017 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:06.478033 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:06.478045 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:06.542941 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:06.542988 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:06.557943 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:06.557971 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:06.638974 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:06.630522    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.631109    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.632992    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.633569    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.635234    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:06.630522    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.631109    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.632992    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.633569    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.635234    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:06.638996 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:06.639008 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:06.665193 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:06.665231 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:09.197687 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:09.208321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:09.208432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:09.233962 1550381 cri.go:89] found id: ""
	I1218 01:49:09.233985 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.233993 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:09.234000 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:09.234061 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:09.262673 1550381 cri.go:89] found id: ""
	I1218 01:49:09.262697 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.262706 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:09.262712 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:09.262773 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:09.289951 1550381 cri.go:89] found id: ""
	I1218 01:49:09.289973 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.289982 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:09.289988 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:09.290053 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:09.314541 1550381 cri.go:89] found id: ""
	I1218 01:49:09.314570 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.314578 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:09.314585 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:09.314650 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:09.343459 1550381 cri.go:89] found id: ""
	I1218 01:49:09.343484 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.343493 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:09.343500 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:09.343563 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:09.376389 1550381 cri.go:89] found id: ""
	I1218 01:49:09.376413 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.376422 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:09.376429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:09.376488 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:09.436490 1550381 cri.go:89] found id: ""
	I1218 01:49:09.436567 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.436591 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:09.436611 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:09.436730 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:09.486769 1550381 cri.go:89] found id: ""
	I1218 01:49:09.486798 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.486807 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:09.486817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:09.486827 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:09.512058 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:09.512099 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:09.540109 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:09.540137 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:09.595196 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:09.595233 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:09.610057 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:09.610088 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:09.676821 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:09.667757    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.668366    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670119    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670625    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.672212    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:09.667757    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.668366    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670119    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670625    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.672212    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:11.961101 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:49:12.022946 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:49:12.023052 1550381 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 01:49:12.177224 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:12.188868 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:12.188946 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:12.214139 1550381 cri.go:89] found id: ""
	I1218 01:49:12.214162 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.214171 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:12.214178 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:12.214264 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:12.242355 1550381 cri.go:89] found id: ""
	I1218 01:49:12.242380 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.242389 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:12.242395 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:12.242483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:12.266515 1550381 cri.go:89] found id: ""
	I1218 01:49:12.266540 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.266548 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:12.266555 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:12.266613 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:12.290463 1550381 cri.go:89] found id: ""
	I1218 01:49:12.290529 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.290545 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:12.290553 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:12.290618 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:12.318223 1550381 cri.go:89] found id: ""
	I1218 01:49:12.318247 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.318256 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:12.318262 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:12.318337 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:12.342197 1550381 cri.go:89] found id: ""
	I1218 01:49:12.342222 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.342231 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:12.342238 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:12.342302 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:12.370588 1550381 cri.go:89] found id: ""
	I1218 01:49:12.370611 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.370620 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:12.370626 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:12.370688 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:12.418224 1550381 cri.go:89] found id: ""
	I1218 01:49:12.418249 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.418258 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:12.418268 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:12.418279 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:12.523068 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:12.514778    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.515411    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517016    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517626    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.519200    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:12.514778    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.515411    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517016    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517626    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.519200    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:12.523095 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:12.523108 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:12.549040 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:12.549076 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:12.577176 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:12.577201 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:12.631665 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:12.631703 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:15.147547 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:15.158736 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:15.158812 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:15.184772 1550381 cri.go:89] found id: ""
	I1218 01:49:15.184838 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.184862 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:15.184881 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:15.184962 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:15.210609 1550381 cri.go:89] found id: ""
	I1218 01:49:15.210632 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.210641 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:15.210648 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:15.210712 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:15.238686 1550381 cri.go:89] found id: ""
	I1218 01:49:15.238722 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.238734 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:15.238741 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:15.238815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:15.264618 1550381 cri.go:89] found id: ""
	I1218 01:49:15.264675 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.264684 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:15.264692 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:15.264757 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:15.295205 1550381 cri.go:89] found id: ""
	I1218 01:49:15.295229 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.295244 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:15.295250 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:15.295319 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:15.320375 1550381 cri.go:89] found id: ""
	I1218 01:49:15.320398 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.320406 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:15.320412 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:15.320472 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:15.345880 1550381 cri.go:89] found id: ""
	I1218 01:49:15.345912 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.345921 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:15.345928 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:15.345989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:15.371477 1550381 cri.go:89] found id: ""
	I1218 01:49:15.371499 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.371508 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:15.371518 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:15.371530 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:15.432289 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:15.432325 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:15.513081 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:15.513118 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:15.528085 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:15.528163 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:15.589922 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:15.582052    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.582656    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584154    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584659    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.586181    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:15.582052    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.582656    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584154    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584659    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.586181    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:15.589943 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:15.589955 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:17.429823 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:49:17.494063 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:49:17.494186 1550381 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 01:49:17.497997 1550381 out.go:179] * Enabled addons: 
	I1218 01:49:17.500791 1550381 addons.go:530] duration metric: took 1m44.209848117s for enable addons: enabled=[]
	I1218 01:49:18.115485 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:18.126625 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:18.126750 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:18.152997 1550381 cri.go:89] found id: ""
	I1218 01:49:18.153031 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.153041 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:18.153048 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:18.153114 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:18.184726 1550381 cri.go:89] found id: ""
	I1218 01:49:18.184748 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.184757 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:18.184764 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:18.184833 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:18.213873 1550381 cri.go:89] found id: ""
	I1218 01:49:18.213945 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.213971 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:18.213991 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:18.214081 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:18.243010 1550381 cri.go:89] found id: ""
	I1218 01:49:18.243086 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.243109 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:18.243128 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:18.243218 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:18.267052 1550381 cri.go:89] found id: ""
	I1218 01:49:18.267117 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.267142 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:18.267158 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:18.267246 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:18.291939 1550381 cri.go:89] found id: ""
	I1218 01:49:18.292002 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.292026 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:18.292045 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:18.292129 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:18.318195 1550381 cri.go:89] found id: ""
	I1218 01:49:18.318219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.318233 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:18.318240 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:18.318299 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:18.346276 1550381 cri.go:89] found id: ""
	I1218 01:49:18.346310 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.346319 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:18.346329 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:18.346341 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:18.407199 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:18.407257 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:18.440997 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:18.441077 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:18.537719 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:18.529288    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.529919    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.531704    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.532082    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.533717    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:18.529288    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.529919    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.531704    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.532082    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.533717    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:18.537789 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:18.537810 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:18.563514 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:18.563550 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:21.091361 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:21.102189 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:21.102289 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:21.130931 1550381 cri.go:89] found id: ""
	I1218 01:49:21.130958 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.130967 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:21.130974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:21.131033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:21.155877 1550381 cri.go:89] found id: ""
	I1218 01:49:21.155951 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.155984 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:21.156004 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:21.156088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:21.180785 1550381 cri.go:89] found id: ""
	I1218 01:49:21.180809 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.180818 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:21.180824 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:21.180908 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:21.206344 1550381 cri.go:89] found id: ""
	I1218 01:49:21.206366 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.206375 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:21.206381 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:21.206441 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:21.230752 1550381 cri.go:89] found id: ""
	I1218 01:49:21.230775 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.230783 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:21.230789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:21.230846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:21.255317 1550381 cri.go:89] found id: ""
	I1218 01:49:21.255391 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.255416 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:21.255436 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:21.255520 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:21.284319 1550381 cri.go:89] found id: ""
	I1218 01:49:21.284345 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.284355 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:21.284361 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:21.284420 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:21.313090 1550381 cri.go:89] found id: ""
	I1218 01:49:21.313116 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.313124 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:21.313133 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:21.313143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:21.367961 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:21.367997 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:21.382941 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:21.382972 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:21.496229 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:21.487688    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.488579    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490302    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490870    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.492137    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:21.487688    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.488579    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490302    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490870    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.492137    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:21.496249 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:21.496261 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:21.526182 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:21.526216 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:24.057294 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:24.070220 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:24.070292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:24.104394 1550381 cri.go:89] found id: ""
	I1218 01:49:24.104419 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.104428 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:24.104434 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:24.104495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:24.129335 1550381 cri.go:89] found id: ""
	I1218 01:49:24.129358 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.129366 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:24.129371 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:24.129429 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:24.153339 1550381 cri.go:89] found id: ""
	I1218 01:49:24.153361 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.153370 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:24.153376 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:24.153439 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:24.178645 1550381 cri.go:89] found id: ""
	I1218 01:49:24.178669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.178677 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:24.178684 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:24.178742 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:24.202721 1550381 cri.go:89] found id: ""
	I1218 01:49:24.202744 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.202753 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:24.202765 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:24.202827 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:24.228231 1550381 cri.go:89] found id: ""
	I1218 01:49:24.228255 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.228264 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:24.228271 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:24.228334 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:24.252564 1550381 cri.go:89] found id: ""
	I1218 01:49:24.252585 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.252593 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:24.252599 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:24.252682 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:24.282899 1550381 cri.go:89] found id: ""
	I1218 01:49:24.282975 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.283000 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:24.283015 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:24.283027 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:24.340471 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:24.340506 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:24.355477 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:24.355511 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:24.448676 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:24.434380    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.435192    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.436820    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441209    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441503    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:24.448701 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:24.448720 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:24.484800 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:24.484875 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
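Each probe cycle above runs `sudo crictl ps -a --quiet --name=<component>` once per control-plane component; an empty ID list is what produces the "No container was found matching" warnings. A minimal Go sketch of the same check, assuming crictl on PATH and passwordless sudo (the component list is copied from the log; nothing here is minikube source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Components the log probes, in the same order.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Mirrors: sudo crictl ps -a --quiet --name=<name>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: found %v\n", name, ids)
    	}
    }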
	I1218 01:49:27.016359 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:27.027204 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:27.027276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:27.054358 1550381 cri.go:89] found id: ""
	I1218 01:49:27.054383 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.054392 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:27.054398 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:27.054456 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:27.079191 1550381 cri.go:89] found id: ""
	I1218 01:49:27.079219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.079228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:27.079234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:27.079297 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:27.104834 1550381 cri.go:89] found id: ""
	I1218 01:49:27.104856 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.104865 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:27.104871 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:27.104943 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:27.134064 1550381 cri.go:89] found id: ""
	I1218 01:49:27.134138 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.134154 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:27.134161 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:27.134227 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:27.159891 1550381 cri.go:89] found id: ""
	I1218 01:49:27.159915 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.159925 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:27.159931 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:27.159990 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:27.186008 1550381 cri.go:89] found id: ""
	I1218 01:49:27.186035 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.186044 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:27.186050 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:27.186135 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:27.211311 1550381 cri.go:89] found id: ""
	I1218 01:49:27.211337 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.211346 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:27.211352 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:27.211433 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:27.236397 1550381 cri.go:89] found id: ""
	I1218 01:49:27.236431 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.236440 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:27.236450 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:27.236461 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:27.293966 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:27.294001 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:27.309317 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:27.309355 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:27.380717 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:27.372509    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.373162    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374199    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374687    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.376361    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:27.380737 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:27.380749 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:27.410136 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:27.410175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:29.955798 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:29.968674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:29.968788 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:29.996170 1550381 cri.go:89] found id: ""
	I1218 01:49:29.996197 1550381 logs.go:282] 0 containers: []
	W1218 01:49:29.996208 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:29.996214 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:29.996276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:30.036959 1550381 cri.go:89] found id: ""
	I1218 01:49:30.036983 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.036992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:30.036999 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:30.037067 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:30.069036 1550381 cri.go:89] found id: ""
	I1218 01:49:30.069065 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.069076 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:30.069092 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:30.069231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:30.098534 1550381 cri.go:89] found id: ""
	I1218 01:49:30.098559 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.098568 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:30.098575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:30.098637 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:30.127481 1550381 cri.go:89] found id: ""
	I1218 01:49:30.127506 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.127515 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:30.127521 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:30.127588 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:30.153748 1550381 cri.go:89] found id: ""
	I1218 01:49:30.153773 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.153782 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:30.153789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:30.153872 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:30.178887 1550381 cri.go:89] found id: ""
	I1218 01:49:30.178913 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.178922 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:30.178929 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:30.179010 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:30.204533 1550381 cri.go:89] found id: ""
	I1218 01:49:30.204559 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.204568 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:30.204578 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:30.204589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:30.260146 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:30.260180 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:30.275037 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:30.275067 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:30.338959 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:30.330794    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.331353    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333075    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333584    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.335039    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:30.338978 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:30.338990 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:30.364082 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:30.364116 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
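Every "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver because nothing is listening on localhost:8443, which is consistent with the empty kube-apiserver container list in the same cycle. A quick way to tell "connection refused" (port closed, no process) apart from a timeout (filtering or routing) is a plain TCP dial from inside the node; this is a hypothetical follow-up check, not part of the test:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// kubectl above dials [::1]:8443; do the same with a short timeout.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// "connection refused" means the port is closed (no apiserver process),
    		// as opposed to a timeout, which would point at filtering or routing.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }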
	I1218 01:49:32.906096 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:32.916660 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:32.916731 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:32.940216 1550381 cri.go:89] found id: ""
	I1218 01:49:32.940238 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.940247 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:32.940254 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:32.940314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:32.967934 1550381 cri.go:89] found id: ""
	I1218 01:49:32.967956 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.967963 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:32.967970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:32.968027 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:32.991930 1550381 cri.go:89] found id: ""
	I1218 01:49:32.991952 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.991961 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:32.991968 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:32.992027 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:33.018215 1550381 cri.go:89] found id: ""
	I1218 01:49:33.018280 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.018303 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:33.018322 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:33.018416 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:33.046738 1550381 cri.go:89] found id: ""
	I1218 01:49:33.046783 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.046794 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:33.046801 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:33.046873 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:33.072642 1550381 cri.go:89] found id: ""
	I1218 01:49:33.072669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.072678 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:33.072684 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:33.072743 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:33.097687 1550381 cri.go:89] found id: ""
	I1218 01:49:33.097713 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.097722 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:33.097729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:33.097980 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:33.125010 1550381 cri.go:89] found id: ""
	I1218 01:49:33.125090 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.125107 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:33.125118 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:33.125134 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:33.139761 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:33.139795 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:33.204966 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:33.197038    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.197630    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199169    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199600    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.201028    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:33.204990 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:33.205002 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:33.230884 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:33.230929 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:33.263709 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:33.263739 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
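The timestamps show the whole probe-and-gather cycle re-running roughly every three seconds (01:49:24, :27, :30, :33, ...), i.e. a fixed-interval retry loop waiting for `sudo pgrep -xnf kube-apiserver.*minikube.*` to find an apiserver process. A generic sketch of such a loop, with the interval inferred from the log timestamps rather than taken from minikube source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollUntil retries check at a fixed interval until it succeeds or the
    // deadline passes.
    func pollUntil(deadline time.Time, interval time.Duration, check func() bool) bool {
    	for time.Now().Before(deadline) {
    		if check() {
    			return true
    		}
    		time.Sleep(interval)
    	}
    	return false
    }

    func main() {
    	ok := pollUntil(time.Now().Add(1*time.Minute), 3*time.Second, func() bool {
    		// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
    		// pgrep exits non-zero when no process matches the pattern.
    		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    	})
    	fmt.Println("apiserver process found:", ok)
    }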
	I1218 01:49:35.820022 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:35.830483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:35.830552 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:35.855134 1550381 cri.go:89] found id: ""
	I1218 01:49:35.855161 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.855170 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:35.855177 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:35.855239 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:35.881968 1550381 cri.go:89] found id: ""
	I1218 01:49:35.881997 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.882006 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:35.882013 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:35.882074 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:35.907456 1550381 cri.go:89] found id: ""
	I1218 01:49:35.907481 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.907490 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:35.907496 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:35.907555 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:35.936819 1550381 cri.go:89] found id: ""
	I1218 01:49:35.936845 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.936854 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:35.936860 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:35.936939 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:35.961081 1550381 cri.go:89] found id: ""
	I1218 01:49:35.961107 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.961116 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:35.961123 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:35.961187 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:35.985065 1550381 cri.go:89] found id: ""
	I1218 01:49:35.985091 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.985100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:35.985106 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:35.985189 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:36.013869 1550381 cri.go:89] found id: ""
	I1218 01:49:36.013894 1550381 logs.go:282] 0 containers: []
	W1218 01:49:36.013903 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:36.013909 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:36.013972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:36.039260 1550381 cri.go:89] found id: ""
	I1218 01:49:36.039283 1550381 logs.go:282] 0 containers: []
	W1218 01:49:36.039291 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:36.039300 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:36.039312 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:36.069571 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:36.069659 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:36.126151 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:36.126186 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:36.141484 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:36.141514 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:36.209837 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:36.200737    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.201540    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.202385    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.203307    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.204008    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:36.209870 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:36.209883 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
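The "container status" gather uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so the listing still works when crictl is missing or errors out. The same try-then-fall-back shape in Go, as a hypothetical helper rather than minikube code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl and falls back to docker when crictl is
    // absent or exits non-zero, like the shell pipeline in the log.
    func containerStatus() ([]byte, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    		return out, nil
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }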
	I1218 01:49:38.735237 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:38.746104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:38.746193 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:38.772225 1550381 cri.go:89] found id: ""
	I1218 01:49:38.772252 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.772261 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:38.772268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:38.772330 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:38.797393 1550381 cri.go:89] found id: ""
	I1218 01:49:38.797420 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.797429 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:38.797435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:38.797498 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:38.822824 1550381 cri.go:89] found id: ""
	I1218 01:49:38.822847 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.822859 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:38.822868 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:38.822927 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:38.847877 1550381 cri.go:89] found id: ""
	I1218 01:49:38.847910 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.847919 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:38.847925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:38.847985 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:38.874529 1550381 cri.go:89] found id: ""
	I1218 01:49:38.874555 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.874564 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:38.874570 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:38.874655 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:38.902339 1550381 cri.go:89] found id: ""
	I1218 01:49:38.902406 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.902429 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:38.902447 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:38.902535 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:38.927712 1550381 cri.go:89] found id: ""
	I1218 01:49:38.927745 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.927754 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:38.927761 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:38.927830 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:38.954870 1550381 cri.go:89] found id: ""
	I1218 01:49:38.954937 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.954964 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:38.954986 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:38.955069 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:39.010028 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:39.010080 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:39.025363 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:39.025392 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:39.091129 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:39.080844    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.081674    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.083594    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.084220    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.086510    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:39.091201 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:39.091221 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:39.116775 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:39.116809 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:41.650913 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:41.662276 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:41.662344 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:41.731218 1550381 cri.go:89] found id: ""
	I1218 01:49:41.731246 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.731255 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:41.731261 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:41.731319 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:41.756567 1550381 cri.go:89] found id: ""
	I1218 01:49:41.756665 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.756680 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:41.756686 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:41.756755 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:41.785421 1550381 cri.go:89] found id: ""
	I1218 01:49:41.785449 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.785458 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:41.785464 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:41.785522 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:41.810479 1550381 cri.go:89] found id: ""
	I1218 01:49:41.810501 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.810510 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:41.810524 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:41.810590 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:41.835839 1550381 cri.go:89] found id: ""
	I1218 01:49:41.835863 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.835872 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:41.835878 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:41.835940 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:41.864064 1550381 cri.go:89] found id: ""
	I1218 01:49:41.864092 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.864100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:41.864106 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:41.864162 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:41.889810 1550381 cri.go:89] found id: ""
	I1218 01:49:41.889880 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.889911 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:41.889924 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:41.889997 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:41.913756 1550381 cri.go:89] found id: ""
	I1218 01:49:41.913824 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.913849 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:41.913871 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:41.913902 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:41.943258 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:41.943283 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:41.998631 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:41.998673 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:42.016861 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:42.016892 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:42.086550 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:42.077000    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.077668    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.079628    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.080105    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.081866    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:42.086592 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:42.086609 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:44.616940 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:44.627561 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:44.627705 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:44.700300 1550381 cri.go:89] found id: ""
	I1218 01:49:44.700322 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.700331 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:44.700337 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:44.700396 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:44.736586 1550381 cri.go:89] found id: ""
	I1218 01:49:44.736669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.736685 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:44.736693 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:44.736760 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:44.760996 1550381 cri.go:89] found id: ""
	I1218 01:49:44.761020 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.761029 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:44.761035 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:44.761102 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:44.786601 1550381 cri.go:89] found id: ""
	I1218 01:49:44.786637 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.786646 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:44.786655 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:44.786723 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:44.812292 1550381 cri.go:89] found id: ""
	I1218 01:49:44.812314 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.812322 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:44.812329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:44.812415 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:44.838185 1550381 cri.go:89] found id: ""
	I1218 01:49:44.838219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.838229 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:44.838236 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:44.838298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:44.867060 1550381 cri.go:89] found id: ""
	I1218 01:49:44.867081 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.867089 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:44.867095 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:44.867151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:44.892070 1550381 cri.go:89] found id: ""
	I1218 01:49:44.892099 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.892108 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:44.892117 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:44.892133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:44.906549 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:44.906575 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:44.971842 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:44.963108    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.963929    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.965509    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.966125    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.967743    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:44.971863 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:44.971877 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:44.997318 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:44.997352 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:45.078604 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:45.078658 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:47.669132 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:47.684661 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:47.684728 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:47.724476 1550381 cri.go:89] found id: ""
	I1218 01:49:47.724498 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.724509 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:47.724515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:47.724576 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:47.758012 1550381 cri.go:89] found id: ""
	I1218 01:49:47.758036 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.758044 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:47.758051 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:47.758109 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:47.786154 1550381 cri.go:89] found id: ""
	I1218 01:49:47.786180 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.786189 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:47.786196 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:47.786258 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:47.810902 1550381 cri.go:89] found id: ""
	I1218 01:49:47.810928 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.810937 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:47.810944 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:47.811003 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:47.836006 1550381 cri.go:89] found id: ""
	I1218 01:49:47.836032 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.836040 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:47.836049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:47.836119 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:47.861054 1550381 cri.go:89] found id: ""
	I1218 01:49:47.861078 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.861087 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:47.861094 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:47.861167 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:47.889731 1550381 cri.go:89] found id: ""
	I1218 01:49:47.889756 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.889765 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:47.889772 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:47.889829 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:47.918028 1550381 cri.go:89] found id: ""
	I1218 01:49:47.918055 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.918064 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:47.918073 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:47.918090 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:47.972822 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:47.972860 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:47.987701 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:47.987730 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:48.055884 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:48.046737    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.047612    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.048505    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050346    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050922    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:48.055906 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:48.055919 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:48.081983 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:48.082021 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
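With no containers to inspect, each pass falls back to host-level sources: the kubelet and containerd journals, a filtered dmesg, the runtime's own container listing (crictl, with docker ps as a fallback), plus the kubectl describe-nodes attempt that keeps failing. The shell commands, lifted from the Run: lines above (only the ssh wrapper is omitted):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a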
	I1218 01:49:50.614399 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:50.625532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:50.625607 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:50.669636 1550381 cri.go:89] found id: ""
	I1218 01:49:50.669663 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.669672 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:50.669678 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:50.669737 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:50.731793 1550381 cri.go:89] found id: ""
	I1218 01:49:50.731820 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.731829 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:50.731835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:50.731903 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:50.758384 1550381 cri.go:89] found id: ""
	I1218 01:49:50.758407 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.758416 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:50.758422 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:50.758481 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:50.783123 1550381 cri.go:89] found id: ""
	I1218 01:49:50.783148 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.783157 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:50.783163 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:50.783224 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:50.807986 1550381 cri.go:89] found id: ""
	I1218 01:49:50.808010 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.808019 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:50.808026 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:50.808084 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:50.833014 1550381 cri.go:89] found id: ""
	I1218 01:49:50.833037 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.833058 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:50.833066 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:50.833125 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:50.857525 1550381 cri.go:89] found id: ""
	I1218 01:49:50.857551 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.857560 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:50.857567 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:50.857631 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:50.882511 1550381 cri.go:89] found id: ""
	I1218 01:49:50.882535 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.882543 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:50.882552 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:50.882565 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:50.916936 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:50.916963 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:50.972064 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:50.972098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:50.987003 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:50.987031 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:51.056796 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:51.048453    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.048999    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.050667    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.051216    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.052850    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:51.056817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:51.056829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:53.582769 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:53.594237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:53.594316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:53.619778 1550381 cri.go:89] found id: ""
	I1218 01:49:53.619800 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.619809 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:53.619815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:53.619877 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:53.677064 1550381 cri.go:89] found id: ""
	I1218 01:49:53.677087 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.677097 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:53.677103 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:53.677179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:53.733772 1550381 cri.go:89] found id: ""
	I1218 01:49:53.733798 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.733808 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:53.733815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:53.733876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:53.759569 1550381 cri.go:89] found id: ""
	I1218 01:49:53.759594 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.759603 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:53.759609 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:53.759667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:53.785969 1550381 cri.go:89] found id: ""
	I1218 01:49:53.785993 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.786002 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:53.786008 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:53.786072 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:53.810819 1550381 cri.go:89] found id: ""
	I1218 01:49:53.810843 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.810851 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:53.810858 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:53.810923 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:53.836207 1550381 cri.go:89] found id: ""
	I1218 01:49:53.836271 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.836295 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:53.836314 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:53.836395 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:53.860468 1550381 cri.go:89] found id: ""
	I1218 01:49:53.860499 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.860508 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:53.860518 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:53.860537 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:53.917328 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:53.917365 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:53.932367 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:53.932407 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:54.001703 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:53.991890    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.992813    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994440    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994861    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.996408    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:54.001723 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:54.001737 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:54.030548 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:54.030584 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
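The pgrep timestamps (01:49:47, :50, :53, :56, :59, ...) show the retry cadence: roughly a three-second poll waiting for a kube-apiserver process to appear. The same wait, written as a standalone loop (the 3-second interval matches the observed cadence; the 240-second deadline is an assumption for illustration, not minikube's actual value):

    # hedged sketch: poll for the apiserver process until a deadline
    deadline=$((SECONDS + 240))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "timed out waiting for kube-apiserver" >&2
            exit 1
        fi
        sleep 3
    done
    echo "kube-apiserver is running"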
	I1218 01:49:56.561340 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:56.571927 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:56.571998 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:56.595966 1550381 cri.go:89] found id: ""
	I1218 01:49:56.595996 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.596006 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:56.596012 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:56.596073 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:56.620113 1550381 cri.go:89] found id: ""
	I1218 01:49:56.620136 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.620145 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:56.620151 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:56.620211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:56.655375 1550381 cri.go:89] found id: ""
	I1218 01:49:56.655401 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.655410 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:56.655417 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:56.655477 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:56.711903 1550381 cri.go:89] found id: ""
	I1218 01:49:56.711931 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.711940 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:56.711946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:56.712007 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:56.748501 1550381 cri.go:89] found id: ""
	I1218 01:49:56.748527 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.748536 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:56.748542 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:56.748600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:56.774097 1550381 cri.go:89] found id: ""
	I1218 01:49:56.774121 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.774130 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:56.774137 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:56.774196 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:56.802594 1550381 cri.go:89] found id: ""
	I1218 01:49:56.802618 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.802627 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:56.802633 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:56.802690 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:56.827592 1550381 cri.go:89] found id: ""
	I1218 01:49:56.827615 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.827623 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:56.827633 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:56.827645 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:56.852403 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:56.852433 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:56.880076 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:56.880109 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:56.935675 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:56.935712 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:56.950522 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:56.950549 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:57.019412 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:57.010556    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.011245    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013030    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013710    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.014875    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:59.521100 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:59.531832 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:59.531908 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:59.557309 1550381 cri.go:89] found id: ""
	I1218 01:49:59.557333 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.557342 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:59.557349 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:59.557406 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:59.581813 1550381 cri.go:89] found id: ""
	I1218 01:49:59.581889 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.581911 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:59.581919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:59.581978 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:59.605979 1550381 cri.go:89] found id: ""
	I1218 01:49:59.606003 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.606012 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:59.606018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:59.606101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:59.631076 1550381 cri.go:89] found id: ""
	I1218 01:49:59.631101 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.631110 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:59.631117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:59.631210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:59.670164 1550381 cri.go:89] found id: ""
	I1218 01:49:59.670189 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.670198 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:59.670205 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:59.670309 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:59.706830 1550381 cri.go:89] found id: ""
	I1218 01:49:59.706856 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.706865 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:59.706872 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:59.706953 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:59.739787 1550381 cri.go:89] found id: ""
	I1218 01:49:59.739815 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.739824 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:59.739830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:59.739892 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:59.766523 1550381 cri.go:89] found id: ""
	I1218 01:49:59.766548 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.766558 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:59.766568 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:59.766579 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:59.822153 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:59.822193 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:59.837991 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:59.838016 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:59.905967 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:59.897227    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.897769    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899360    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899879    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.901423    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:59.905990 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:59.906003 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:59.931368 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:59.931401 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:02.467452 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:02.478157 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:02.478230 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:02.504286 1550381 cri.go:89] found id: ""
	I1218 01:50:02.504311 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.504321 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:02.504328 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:02.504390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:02.530207 1550381 cri.go:89] found id: ""
	I1218 01:50:02.530232 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.530242 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:02.530249 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:02.530308 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:02.561278 1550381 cri.go:89] found id: ""
	I1218 01:50:02.561305 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.561314 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:02.561320 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:02.561383 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:02.586119 1550381 cri.go:89] found id: ""
	I1218 01:50:02.586144 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.586153 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:02.586159 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:02.586218 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:02.611212 1550381 cri.go:89] found id: ""
	I1218 01:50:02.611239 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.611249 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:02.611256 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:02.611317 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:02.638670 1550381 cri.go:89] found id: ""
	I1218 01:50:02.638697 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.638705 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:02.638715 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:02.638819 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:02.699868 1550381 cri.go:89] found id: ""
	I1218 01:50:02.699897 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.699906 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:02.699913 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:02.699971 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:02.753340 1550381 cri.go:89] found id: ""
	I1218 01:50:02.753371 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.753381 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:02.753391 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:02.753402 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:02.809735 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:02.809769 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:02.825241 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:02.825271 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:02.894096 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:02.885135    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.885712    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887483    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887956    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.889549    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:02.894118 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:02.894130 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:02.919985 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:02.920021 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:05.450883 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:05.461914 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:05.461989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:05.487197 1550381 cri.go:89] found id: ""
	I1218 01:50:05.487221 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.487230 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:05.487237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:05.487297 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:05.513273 1550381 cri.go:89] found id: ""
	I1218 01:50:05.513304 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.513313 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:05.513321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:05.513385 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:05.544168 1550381 cri.go:89] found id: ""
	I1218 01:50:05.544191 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.544200 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:05.544206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:05.544306 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:05.570574 1550381 cri.go:89] found id: ""
	I1218 01:50:05.570597 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.570607 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:05.570613 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:05.570675 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:05.598812 1550381 cri.go:89] found id: ""
	I1218 01:50:05.598837 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.598845 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:05.598852 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:05.598915 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:05.628314 1550381 cri.go:89] found id: ""
	I1218 01:50:05.628339 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.628348 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:05.628354 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:05.628418 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:05.665714 1550381 cri.go:89] found id: ""
	I1218 01:50:05.665742 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.665751 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:05.665757 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:05.665817 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:05.733576 1550381 cri.go:89] found id: ""
	I1218 01:50:05.733603 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.733624 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:05.733634 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:05.733652 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:05.795404 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:05.795439 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:05.811319 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:05.811347 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:05.878494 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:05.869308    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.869863    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.871616    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.872034    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.873656    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:05.878517 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:05.878532 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:05.904153 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:05.904185 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:08.433275 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:08.443880 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:08.443983 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:08.468382 1550381 cri.go:89] found id: ""
	I1218 01:50:08.468408 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.468417 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:08.468424 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:08.468483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:08.498576 1550381 cri.go:89] found id: ""
	I1218 01:50:08.498629 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.498656 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:08.498662 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:08.498764 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:08.524767 1550381 cri.go:89] found id: ""
	I1218 01:50:08.524790 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.524799 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:08.524806 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:08.524868 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:08.551353 1550381 cri.go:89] found id: ""
	I1218 01:50:08.551380 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.551399 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:08.551406 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:08.551482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:08.577687 1550381 cri.go:89] found id: ""
	I1218 01:50:08.577713 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.577722 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:08.577729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:08.577816 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:08.603410 1550381 cri.go:89] found id: ""
	I1218 01:50:08.603434 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.603443 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:08.603450 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:08.603530 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:08.630799 1550381 cri.go:89] found id: ""
	I1218 01:50:08.630824 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.630833 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:08.630840 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:08.630903 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:08.705200 1550381 cri.go:89] found id: ""
	I1218 01:50:08.705228 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.705237 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:08.705247 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:08.705260 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:08.733020 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:08.733047 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:08.798171 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:08.789301    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.790118    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.791840    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.792498    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.794100    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:08.789301    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.790118    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.791840    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.792498    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.794100    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:08.798195 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:08.798217 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:08.823651 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:08.823682 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:08.851693 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:08.851720 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
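The `cri.go`/`logs.go` lines above show the probe run for each expected control-plane component: `sudo crictl ps -a --quiet --name=<component>` lists container IDs in any state whose name matches, and empty output is what gets reported as `found id: ""` / `0 containers: []`. A minimal standalone sketch of that check (not minikube's actual cri.go code; the Go wrapper and component list are illustrative):

```go
// Minimal sketch of the per-component probe seen in the log above.
// Not minikube's actual cri.go code; wrapper and parsing are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs `crictl ps -a --quiet --name=<name>`:
//   -a       include containers in any state, not just running
//   --quiet  print only container IDs, one per line
//   --name   filter containers by name
// Empty output means no matching container exists yet.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: probe failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c) // the W-level lines above
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}
```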
	I1218 01:50:11.407503 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:11.418083 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:11.418157 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:11.443131 1550381 cri.go:89] found id: ""
	I1218 01:50:11.443153 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.443161 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:11.443167 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:11.443225 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:11.468456 1550381 cri.go:89] found id: ""
	I1218 01:50:11.468480 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.468489 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:11.468495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:11.468559 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:11.494875 1550381 cri.go:89] found id: ""
	I1218 01:50:11.494900 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.494910 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:11.494916 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:11.494976 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:11.522672 1550381 cri.go:89] found id: ""
	I1218 01:50:11.522695 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.522703 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:11.522710 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:11.522774 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:11.550689 1550381 cri.go:89] found id: ""
	I1218 01:50:11.550713 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.550723 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:11.550729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:11.550789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:11.579573 1550381 cri.go:89] found id: ""
	I1218 01:50:11.579600 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.579608 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:11.579615 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:11.579677 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:11.605240 1550381 cri.go:89] found id: ""
	I1218 01:50:11.605265 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.605274 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:11.605281 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:11.605348 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:11.631171 1550381 cri.go:89] found id: ""
	I1218 01:50:11.631198 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.631208 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:11.631217 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:11.631228 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:11.709937 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:11.709969 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:11.779988 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:11.780023 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:11.795215 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:11.795243 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:11.862143 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:11.853713    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.854370    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856115    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856647    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.858165    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:11.853713    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.854370    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856115    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856647    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.858165    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:11.862165 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:11.862177 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
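Between the container probes, the recurring `sudo pgrep -xnf kube-apiserver.*minikube.*` line (repeated here at roughly three-second intervals) checks whether an apiserver process exists at all. In procps pgrep, `-f` matches against the full command line, `-x` requires the pattern to match exactly, and `-n` selects only the newest matching process; exit status 0 means a match, 1 means none. A hedged sketch of one such check:

```go
// Sketch of the apiserver process probe from the log; the pattern and
// sudo invocation mirror the logged command, the Go wrapper is illustrative.
package main

import (
	"fmt"
	"os/exec"
)

// apiserverRunning reports whether pgrep finds a matching process.
// pgrep flags: -f match the full command line, -x exact pattern match,
// -n newest matching process only. Exit 0 = found, exit 1 = no match.
func apiserverRunning() (bool, error) {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err == nil {
		return true, nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return false, nil // pgrep ran fine, there is simply no matching process
	}
	return false, err // pgrep itself failed (exit >1 or could not start)
}

func main() {
	up, err := apiserverRunning()
	fmt.Println("apiserver running:", up, "err:", err)
}
```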
	I1218 01:50:14.389878 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:14.400681 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:14.400756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:14.427103 1550381 cri.go:89] found id: ""
	I1218 01:50:14.427127 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.427136 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:14.427142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:14.427200 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:14.455157 1550381 cri.go:89] found id: ""
	I1218 01:50:14.455180 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.455189 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:14.455195 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:14.455260 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:14.481712 1550381 cri.go:89] found id: ""
	I1218 01:50:14.481738 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.481752 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:14.481759 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:14.481821 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:14.506286 1550381 cri.go:89] found id: ""
	I1218 01:50:14.506312 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.506320 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:14.506327 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:14.506385 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:14.531764 1550381 cri.go:89] found id: ""
	I1218 01:50:14.531789 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.531797 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:14.531804 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:14.531864 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:14.556792 1550381 cri.go:89] found id: ""
	I1218 01:50:14.556817 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.556826 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:14.556832 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:14.556896 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:14.581496 1550381 cri.go:89] found id: ""
	I1218 01:50:14.581521 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.581531 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:14.581537 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:14.581603 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:14.605950 1550381 cri.go:89] found id: ""
	I1218 01:50:14.605973 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.605982 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:14.605992 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:14.606007 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:14.631804 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:14.631838 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:14.684967 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:14.685004 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:14.769991 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:14.770039 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:14.785356 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:14.785391 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:14.851585 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:14.843391    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.844021    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.845569    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.846097    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.847598    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:14.843391    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.844021    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.845569    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.846097    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.847598    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
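Every `describe nodes` attempt in this section fails identically: kubectl dials `https://localhost:8443` and gets `connect: connection refused`, which means nothing is listening on the port at all. That is consistent with the empty `kube-apiserver` probes above, and is a different failure mode from a hung or unreachable server, which would surface as a timeout. A small check that makes the distinction explicit (an illustrative diagnostic, not part of the test suite):

```go
// Distinguish "nothing listening" from "listening but slow" on the
// apiserver port. Illustrative diagnostic, not part of the test suite.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("something is listening on :8443")
	case errors.Is(err, syscall.ECONNREFUSED):
		// The kernel answered with a RST: no process is bound to the port.
		// This is the failure mode repeated throughout the log above.
		fmt.Println("connection refused: no apiserver bound to :8443")
	default:
		// A timeout here would instead suggest a firewall or a wedged listener.
		fmt.Println("other failure:", err)
	}
}
```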
	I1218 01:50:17.353376 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:17.364408 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:17.364479 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:17.389035 1550381 cri.go:89] found id: ""
	I1218 01:50:17.389062 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.389071 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:17.389077 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:17.389141 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:17.418594 1550381 cri.go:89] found id: ""
	I1218 01:50:17.418620 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.418628 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:17.418634 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:17.418693 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:17.444908 1550381 cri.go:89] found id: ""
	I1218 01:50:17.444930 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.444938 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:17.444945 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:17.445006 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:17.470076 1550381 cri.go:89] found id: ""
	I1218 01:50:17.470100 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.470109 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:17.470117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:17.470178 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:17.494949 1550381 cri.go:89] found id: ""
	I1218 01:50:17.494972 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.494984 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:17.494992 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:17.495050 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:17.523740 1550381 cri.go:89] found id: ""
	I1218 01:50:17.523767 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.523775 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:17.523782 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:17.523840 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:17.551184 1550381 cri.go:89] found id: ""
	I1218 01:50:17.551212 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.551220 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:17.551227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:17.551290 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:17.576421 1550381 cri.go:89] found id: ""
	I1218 01:50:17.576446 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.576454 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:17.576464 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:17.576476 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:17.640879 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:17.631690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.632375    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634161    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.636185    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:17.631690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.632375    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634161    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.636185    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:17.640898 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:17.640911 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:17.719096 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:17.719184 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:17.749240 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:17.749266 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:17.804542 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:17.804581 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:20.319731 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:20.329891 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:20.329962 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:20.353449 1550381 cri.go:89] found id: ""
	I1218 01:50:20.353471 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.353479 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:20.353485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:20.353542 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:20.378067 1550381 cri.go:89] found id: ""
	I1218 01:50:20.378089 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.378098 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:20.378104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:20.378162 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:20.403262 1550381 cri.go:89] found id: ""
	I1218 01:50:20.403288 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.403297 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:20.403304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:20.403362 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:20.430817 1550381 cri.go:89] found id: ""
	I1218 01:50:20.430842 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.430851 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:20.430858 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:20.430916 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:20.456026 1550381 cri.go:89] found id: ""
	I1218 01:50:20.456049 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.456057 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:20.456064 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:20.456123 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:20.485362 1550381 cri.go:89] found id: ""
	I1218 01:50:20.485388 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.485397 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:20.485404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:20.485461 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:20.509757 1550381 cri.go:89] found id: ""
	I1218 01:50:20.509779 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.509788 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:20.509794 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:20.509851 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:20.540098 1550381 cri.go:89] found id: ""
	I1218 01:50:20.540122 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.540130 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:20.540139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:20.540151 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:20.597234 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:20.597269 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:20.611800 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:20.611826 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:20.741195 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:20.732954    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.733490    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735078    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735547    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.737340    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:20.732954    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.733490    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735078    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735547    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.737340    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:20.741222 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:20.741235 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:20.766650 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:20.766689 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
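The "container status" step is not a fixed binary but a small shell fallback chain: ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`` resolves crictl's full path when `which` finds it (falling back to the bare name so any error message still names the tool), and if that whole invocation fails it retries with `docker ps -a`. A sketch that drives the same one-liner from Go (the shell string is verbatim from the log; the wrapper is illustrative):

```go
// Run the same diagnostic fallback chain seen in the log. The shell
// string is verbatim from the log above; the Go wrapper is illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `which crictl || echo crictl` yields crictl's full path when it is
	// on PATH, else the bare name. The trailing `|| sudo docker ps -a`
	// retries with Docker if the crictl invocation fails for any reason.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("container status probe failed:", err)
	}
}
```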
	I1218 01:50:23.295459 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:23.306363 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:23.306450 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:23.331822 1550381 cri.go:89] found id: ""
	I1218 01:50:23.331848 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.331857 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:23.331864 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:23.331925 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:23.357194 1550381 cri.go:89] found id: ""
	I1218 01:50:23.357219 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.357228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:23.357234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:23.357293 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:23.383201 1550381 cri.go:89] found id: ""
	I1218 01:50:23.383228 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.383238 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:23.383245 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:23.383306 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:23.409593 1550381 cri.go:89] found id: ""
	I1218 01:50:23.409619 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.409628 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:23.409636 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:23.409694 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:23.434134 1550381 cri.go:89] found id: ""
	I1218 01:50:23.434157 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.434167 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:23.434173 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:23.434231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:23.458615 1550381 cri.go:89] found id: ""
	I1218 01:50:23.458637 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.458645 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:23.458652 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:23.458714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:23.483411 1550381 cri.go:89] found id: ""
	I1218 01:50:23.483433 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.483441 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:23.483447 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:23.483505 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:23.510673 1550381 cri.go:89] found id: ""
	I1218 01:50:23.510697 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.510707 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:23.510716 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:23.510727 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:23.569129 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:23.569169 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:23.583622 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:23.583654 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:23.660608 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:23.639546    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.640211    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.646944    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.648070    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.649869    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:23.639546    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.640211    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.646944    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.648070    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.649869    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:23.660646 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:23.660659 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:23.689685 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:23.689724 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
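One detail worth noticing across the cycles above: the order of the "Gathering logs for ..." steps (kubelet, dmesg, describe nodes, containerd, container status) changes from one pass to the next. That shuffling is consistent with the gatherers being held in a Go map, since the Go runtime deliberately randomizes map iteration order and each `range` over the same map may start at a different point. A self-contained demonstration of that behavior:

```go
// Go randomizes map iteration order on every range statement; if log
// gatherers live in a map, each pass can emit its sections in a
// different order, matching the shuffling visible across the cycles above.
package main

import "fmt"

func main() {
	gatherers := map[string]struct{}{
		"kubelet": {}, "dmesg": {}, "describe nodes": {},
		"containerd": {}, "container status": {},
	}
	for pass := 1; pass <= 3; pass++ {
		fmt.Printf("pass %d:", pass)
		for name := range gatherers { // order varies from range to range
			fmt.Printf(" %s", name)
		}
		fmt.Println()
	}
}
```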
	I1218 01:50:26.245910 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:26.256314 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:26.256387 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:26.281224 1550381 cri.go:89] found id: ""
	I1218 01:50:26.281247 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.281257 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:26.281263 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:26.281331 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:26.310540 1550381 cri.go:89] found id: ""
	I1218 01:50:26.310567 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.310576 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:26.310583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:26.310642 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:26.336372 1550381 cri.go:89] found id: ""
	I1218 01:50:26.336399 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.336407 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:26.336413 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:26.336473 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:26.362095 1550381 cri.go:89] found id: ""
	I1218 01:50:26.362120 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.362129 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:26.362135 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:26.362199 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:26.387399 1550381 cri.go:89] found id: ""
	I1218 01:50:26.387424 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.387433 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:26.387439 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:26.387502 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:26.412769 1550381 cri.go:89] found id: ""
	I1218 01:50:26.412794 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.412803 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:26.412809 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:26.412878 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:26.437098 1550381 cri.go:89] found id: ""
	I1218 01:50:26.437124 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.437132 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:26.437139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:26.437223 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:26.462717 1550381 cri.go:89] found id: ""
	I1218 01:50:26.462744 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.462754 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:26.462764 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:26.462782 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:26.521734 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:26.521768 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:26.536748 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:26.536777 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:26.603709 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:26.594893    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.595492    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597257    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597791    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.599350    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:26.594893    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.595492    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597257    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597791    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.599350    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:26.603730 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:26.603749 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:26.632522 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:26.632599 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:29.191094 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:29.202310 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:29.202386 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:29.227851 1550381 cri.go:89] found id: ""
	I1218 01:50:29.227878 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.227887 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:29.227893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:29.227960 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:29.257631 1550381 cri.go:89] found id: ""
	I1218 01:50:29.257656 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.257665 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:29.257671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:29.257740 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:29.283590 1550381 cri.go:89] found id: ""
	I1218 01:50:29.283615 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.283625 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:29.283631 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:29.283696 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:29.311410 1550381 cri.go:89] found id: ""
	I1218 01:50:29.311436 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.311445 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:29.311452 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:29.311517 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:29.342669 1550381 cri.go:89] found id: ""
	I1218 01:50:29.342695 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.342714 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:29.342721 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:29.342815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:29.367296 1550381 cri.go:89] found id: ""
	I1218 01:50:29.367321 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.367330 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:29.367336 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:29.367396 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:29.392236 1550381 cri.go:89] found id: ""
	I1218 01:50:29.392260 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.392269 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:29.392275 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:29.392336 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:29.417512 1550381 cri.go:89] found id: ""
	I1218 01:50:29.417538 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.417547 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:29.417556 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:29.417594 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:29.488248 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:29.479597    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.480325    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482140    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482810    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.484373    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:29.479597    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.480325    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482140    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482810    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.484373    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:29.488272 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:29.488289 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:29.513850 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:29.513884 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:29.543041 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:29.543071 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:29.602048 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:29.602087 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
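Taken together, this section is one poll loop: every ~3 seconds, check for an apiserver process, probe each component container, and, finding nothing, collect the kubelet/containerd journals, dmesg, `describe nodes`, and container status before trying again until an overall deadline expires (this run ultimately failed after ~498s). A schematic of that loop, assuming hypothetical helpers `apiserverRunning` and `gatherDiagnostics` standing in for the probes shown above (everything here is illustrative, not minikube's actual control flow):

```go
// Schematic of the poll-and-diagnose loop visible in this section.
// apiserverRunning and gatherDiagnostics are hypothetical stand-ins
// for the probes and log gathering shown in the log above.
package main

import (
	"context"
	"fmt"
	"time"
)

func apiserverRunning() bool { return false } // stub: see the pgrep sketch earlier
func gatherDiagnostics() {
	fmt.Println("gathering kubelet/dmesg/describe nodes/containerd/container status...")
}

func waitForAPIServer(timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	tick := time.NewTicker(3 * time.Second) // matches the ~3s cadence in the log
	defer tick.Stop()
	for {
		if apiserverRunning() {
			return nil
		}
		gatherDiagnostics() // on each failed probe, dump diagnostics
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never came up: %w", ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	fmt.Println(waitForAPIServer(10 * time.Second))
}
```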
	I1218 01:50:32.117433 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:32.128498 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:32.128589 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:32.153547 1550381 cri.go:89] found id: ""
	I1218 01:50:32.153571 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.153580 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:32.153587 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:32.153647 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:32.178431 1550381 cri.go:89] found id: ""
	I1218 01:50:32.178455 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.178464 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:32.178471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:32.178529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:32.203336 1550381 cri.go:89] found id: ""
	I1218 01:50:32.203362 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.203371 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:32.203377 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:32.203434 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:32.230677 1550381 cri.go:89] found id: ""
	I1218 01:50:32.230702 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.230712 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:32.230718 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:32.230800 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:32.255544 1550381 cri.go:89] found id: ""
	I1218 01:50:32.255567 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.255576 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:32.255583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:32.255661 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:32.282405 1550381 cri.go:89] found id: ""
	I1218 01:50:32.282468 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.282486 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:32.282493 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:32.282551 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:32.311100 1550381 cri.go:89] found id: ""
	I1218 01:50:32.311124 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.311133 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:32.311139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:32.311195 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:32.339521 1550381 cri.go:89] found id: ""
	I1218 01:50:32.339550 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.339559 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
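Each cri.go/logs.go triple above is one containerd query: list every container (running or exited) whose name matches a control-plane component, then count the returned IDs; in this run every sweep comes back empty. A sketch of that check, assuming crictl is installed and sudo needs no password; listContainerIDs is an illustrative helper, not minikube's internal function:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors "sudo crictl ps -a --quiet --name=X":
    // --quiet prints one container ID per line, and -a includes exited
    // containers, so empty output means the component never ran at all.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %d container(s)\n", c, len(ids))
    	}
    }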
	I1218 01:50:32.339568 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:32.339579 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:32.364381 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:32.364417 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:32.396991 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:32.397017 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:32.453109 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:32.453144 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:32.468129 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:32.468158 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:32.534370 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:32.525120    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.525758    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.527579    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.528149    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.529876    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
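The repeated "dial tcp [::1]:8443: connect: connection refused" is the immediate-refusal case: nothing is bound to the apiserver port at all, which matches crictl finding no kube-apiserver container above. A quick Go probe to confirm that reading (the address is hard-coded here only to match this run's kubeconfig):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Port 8443 matches the kubeconfig used in this run; adjust as needed.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// "connection refused" comes back in milliseconds when no process
    		// is bound to the port; a filtered port would hang to the timeout.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on 8443")
    }

That distinction helps separate "apiserver never started" from network filtering: a refused dial fails instantly, a dropped packet only fails once the timeout expires.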
	I1218 01:50:35.036282 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:35.048487 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:35.048567 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:35.076340 1550381 cri.go:89] found id: ""
	I1218 01:50:35.076365 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.076373 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:35.076386 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:35.076451 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:35.104187 1550381 cri.go:89] found id: ""
	I1218 01:50:35.104211 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.104221 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:35.104227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:35.104290 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:35.131465 1550381 cri.go:89] found id: ""
	I1218 01:50:35.131536 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.131563 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:35.131583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:35.131672 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:35.158198 1550381 cri.go:89] found id: ""
	I1218 01:50:35.158264 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.158281 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:35.158289 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:35.158352 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:35.185390 1550381 cri.go:89] found id: ""
	I1218 01:50:35.185462 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.185476 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:35.185483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:35.185555 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:35.215800 1550381 cri.go:89] found id: ""
	I1218 01:50:35.215893 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.215919 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:35.215946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:35.216046 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:35.243559 1550381 cri.go:89] found id: ""
	I1218 01:50:35.243627 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.243652 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:35.243671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:35.243748 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:35.272051 1550381 cri.go:89] found id: ""
	I1218 01:50:35.272079 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.272088 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:35.272099 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:35.272110 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:35.328789 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:35.328829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:35.343746 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:35.343791 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:35.410255 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:35.400072    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.400453    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.402453    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.404159    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.404848    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:35.410278 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:35.410290 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:35.436151 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:35.436194 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
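The overall shape above is a poll-until-deadline loop: roughly every 2.5 seconds the pgrep probe is re-run and, when it fails, the full crictl sweep and log gathering repeat. A stdlib sketch of that retry pattern; the interval, timeout, and the waitForAPIServer helper are chosen for illustration and are not taken from minikube's source:

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
    // probe: pgrep exits non-zero when no matching process exists.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // waitForAPIServer re-probes on a fixed interval until the process shows
    // up or the context deadline expires, like the ~2.5s cadence in this log.
    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if apiserverRunning() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return errors.New("timed out waiting for kube-apiserver")
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()
    	if err := waitForAPIServer(ctx, 2500*time.Millisecond); err != nil {
    		fmt.Println(err)
    	}
    }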
	I1218 01:50:37.964765 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:37.975595 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:37.975668 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:38.006140 1550381 cri.go:89] found id: ""
	I1218 01:50:38.006168 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.006179 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:38.006186 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:38.006254 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:38.032670 1550381 cri.go:89] found id: ""
	I1218 01:50:38.032696 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.032704 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:38.032711 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:38.032789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:38.058961 1550381 cri.go:89] found id: ""
	I1218 01:50:38.058991 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.059004 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:38.059013 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:38.059086 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:38.093028 1550381 cri.go:89] found id: ""
	I1218 01:50:38.093053 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.093062 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:38.093069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:38.093130 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:38.118000 1550381 cri.go:89] found id: ""
	I1218 01:50:38.118024 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.118033 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:38.118040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:38.118099 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:38.143582 1550381 cri.go:89] found id: ""
	I1218 01:50:38.143609 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.143620 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:38.143627 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:38.143687 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:38.170663 1550381 cri.go:89] found id: ""
	I1218 01:50:38.170692 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.170701 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:38.170707 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:38.170773 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:38.195587 1550381 cri.go:89] found id: ""
	I1218 01:50:38.195610 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.195619 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:38.195629 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:38.195640 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:38.250718 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:38.250757 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:38.265740 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:38.265766 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:38.332572 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:38.323728    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.324588    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.326294    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.326975    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.328670    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:38.332602 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:38.332653 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:38.358827 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:38.358864 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
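The "container status" line above uses a shell fallback chain: prefer crictl if it resolves on PATH, otherwise fall back to docker, and list all containers either way. The same choice expressed in Go, with containerStatus as an illustrative helper name:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl when it resolves on PATH and falls back
    // to docker otherwise, listing all containers (-a) in either case.
    func containerStatus() (string, error) {
    	tool := "docker"
    	if _, err := exec.LookPath("crictl"); err == nil {
    		tool = "crictl"
    	}
    	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    		return
    	}
    	fmt.Print(out)
    }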
	I1218 01:50:40.892874 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:40.912835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:40.912911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:40.974270 1550381 cri.go:89] found id: ""
	I1218 01:50:40.974363 1550381 logs.go:282] 0 containers: []
	W1218 01:50:40.974391 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:40.974427 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:40.974538 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:41.009749 1550381 cri.go:89] found id: ""
	I1218 01:50:41.009826 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.009862 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:41.009893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:41.009999 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:41.036864 1550381 cri.go:89] found id: ""
	I1218 01:50:41.036933 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.036959 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:41.036974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:41.037050 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:41.062681 1550381 cri.go:89] found id: ""
	I1218 01:50:41.062708 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.062717 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:41.062723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:41.062785 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:41.088510 1550381 cri.go:89] found id: ""
	I1218 01:50:41.088537 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.088562 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:41.088569 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:41.088677 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:41.113288 1550381 cri.go:89] found id: ""
	I1218 01:50:41.113311 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.113321 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:41.113328 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:41.113431 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:41.138413 1550381 cri.go:89] found id: ""
	I1218 01:50:41.138438 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.138447 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:41.138453 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:41.138510 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:41.164559 1550381 cri.go:89] found id: ""
	I1218 01:50:41.164592 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.164601 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:41.164612 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:41.164655 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:41.220220 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:41.220257 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:41.235147 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:41.235175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:41.301835 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:41.291925    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.292729    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.294375    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.295219    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.297559    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:41.301860 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:41.301873 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:41.327289 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:41.327322 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:43.855149 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:43.865567 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:43.865639 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:43.901178 1550381 cri.go:89] found id: ""
	I1218 01:50:43.901222 1550381 logs.go:282] 0 containers: []
	W1218 01:50:43.901231 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:43.901237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:43.901308 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:43.975051 1550381 cri.go:89] found id: ""
	I1218 01:50:43.975085 1550381 logs.go:282] 0 containers: []
	W1218 01:50:43.975095 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:43.975103 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:43.975175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:44.002012 1550381 cri.go:89] found id: ""
	I1218 01:50:44.002051 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.002062 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:44.002069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:44.002155 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:44.029977 1550381 cri.go:89] found id: ""
	I1218 01:50:44.030055 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.030090 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:44.030122 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:44.030212 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:44.055154 1550381 cri.go:89] found id: ""
	I1218 01:50:44.055182 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.055199 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:44.055206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:44.055264 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:44.080010 1550381 cri.go:89] found id: ""
	I1218 01:50:44.080081 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.080118 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:44.080142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:44.080234 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:44.106566 1550381 cri.go:89] found id: ""
	I1218 01:50:44.106590 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.106599 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:44.106606 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:44.106685 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:44.130836 1550381 cri.go:89] found id: ""
	I1218 01:50:44.130864 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.130873 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:44.130883 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:44.130894 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:44.185795 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:44.185833 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:44.200138 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:44.200164 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:44.265688 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:44.257127    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.257682    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.259162    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.259663    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.261095    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:44.265760 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:44.265786 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:44.290625 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:44.290662 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:46.817986 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:46.829340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:46.829433 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:46.854080 1550381 cri.go:89] found id: ""
	I1218 01:50:46.854105 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.854113 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:46.854121 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:46.854178 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:46.894044 1550381 cri.go:89] found id: ""
	I1218 01:50:46.894069 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.894078 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:46.894084 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:46.894144 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:46.979469 1550381 cri.go:89] found id: ""
	I1218 01:50:46.979536 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.979561 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:46.979580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:46.979670 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:47.007329 1550381 cri.go:89] found id: ""
	I1218 01:50:47.007393 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.007416 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:47.007435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:47.007524 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:47.036488 1550381 cri.go:89] found id: ""
	I1218 01:50:47.036515 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.036530 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:47.036537 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:47.036600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:47.061288 1550381 cri.go:89] found id: ""
	I1218 01:50:47.061318 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.061327 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:47.061334 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:47.061394 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:47.086889 1550381 cri.go:89] found id: ""
	I1218 01:50:47.086916 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.086925 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:47.086932 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:47.086995 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:47.111795 1550381 cri.go:89] found id: ""
	I1218 01:50:47.111826 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.111835 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:47.111844 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:47.111855 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:47.166527 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:47.166560 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:47.184211 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:47.184238 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:47.251953 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:47.243102    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.243996    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.245625    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.246165    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.247773    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:47.251974 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:47.251986 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:47.277100 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:47.277134 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:49.805362 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:49.816269 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:49.816341 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:49.843797 1550381 cri.go:89] found id: ""
	I1218 01:50:49.843820 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.843828 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:49.843834 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:49.843894 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:49.869725 1550381 cri.go:89] found id: ""
	I1218 01:50:49.869751 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.869760 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:49.869766 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:49.869826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:49.913079 1550381 cri.go:89] found id: ""
	I1218 01:50:49.913102 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.913110 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:49.913117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:49.913175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:49.978366 1550381 cri.go:89] found id: ""
	I1218 01:50:49.978456 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.978481 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:49.978506 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:49.978669 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:50.015889 1550381 cri.go:89] found id: ""
	I1218 01:50:50.015961 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.015995 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:50.016015 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:50.016118 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:50.043973 1550381 cri.go:89] found id: ""
	I1218 01:50:50.044008 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.044020 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:50.044028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:50.044097 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:50.071368 1550381 cri.go:89] found id: ""
	I1218 01:50:50.071397 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.071407 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:50.071415 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:50.071492 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:50.100352 1550381 cri.go:89] found id: ""
	I1218 01:50:50.100381 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.100392 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:50.100402 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:50.100414 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:50.157120 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:50.157156 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:50.171935 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:50.171962 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:50.243754 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:50.233187    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.233848    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.235761    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.238144    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.239335    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:50.243779 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:50.243792 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:50.271841 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:50.271895 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:52.801073 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:52.811866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:52.811938 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:52.841370 1550381 cri.go:89] found id: ""
	I1218 01:50:52.841396 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.841404 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:52.841411 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:52.841477 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:52.866527 1550381 cri.go:89] found id: ""
	I1218 01:50:52.866549 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.866557 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:52.866564 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:52.866629 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:52.905295 1550381 cri.go:89] found id: ""
	I1218 01:50:52.905323 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.905333 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:52.905340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:52.905402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:52.976848 1550381 cri.go:89] found id: ""
	I1218 01:50:52.976871 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.976880 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:52.976886 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:52.976945 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:53.005921 1550381 cri.go:89] found id: ""
	I1218 01:50:53.005996 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.006013 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:53.006021 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:53.006096 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:53.035172 1550381 cri.go:89] found id: ""
	I1218 01:50:53.035209 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.035219 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:53.035226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:53.035295 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:53.062748 1550381 cri.go:89] found id: ""
	I1218 01:50:53.062816 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.062841 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:53.062856 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:53.062933 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:53.088160 1550381 cri.go:89] found id: ""
	I1218 01:50:53.088194 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.088203 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:53.088215 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:53.088227 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:53.143868 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:53.143906 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:53.159169 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:53.159240 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:53.226415 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:53.217507    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.218119    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220118    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220684    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.222463    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:53.226438 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:53.226451 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:53.251410 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:53.251448 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
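	The probe sequence above repeats once per control-plane component. A minimal hand-run equivalent inside the node, using the same crictl invocation the log records (the loop wrapper itself is illustrative, not part of minikube):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")   # same flags as in the log lines above
	  [ -z "$ids" ] && echo "No container was found matching \"$c\""
	done

	Every pass in this run returns an empty ID list, which is why each component is reported as missing.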
	I1218 01:50:55.783464 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:55.793844 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:55.793915 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:55.822511 1550381 cri.go:89] found id: ""
	I1218 01:50:55.822543 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.822552 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:55.822559 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:55.822630 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:55.852049 1550381 cri.go:89] found id: ""
	I1218 01:50:55.852076 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.852084 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:55.852090 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:55.852167 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:55.877944 1550381 cri.go:89] found id: ""
	I1218 01:50:55.877974 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.877982 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:55.877989 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:55.878045 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:55.964104 1550381 cri.go:89] found id: ""
	I1218 01:50:55.964127 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.964136 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:55.964142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:55.964198 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:55.989628 1550381 cri.go:89] found id: ""
	I1218 01:50:55.989658 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.989667 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:55.989681 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:55.989752 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:56.024436 1550381 cri.go:89] found id: ""
	I1218 01:50:56.024465 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.024474 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:56.024480 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:56.024544 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:56.049953 1550381 cri.go:89] found id: ""
	I1218 01:50:56.050028 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.050045 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:56.050053 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:56.050118 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:56.075666 1550381 cri.go:89] found id: ""
	I1218 01:50:56.075711 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.075720 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:56.075729 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:56.075747 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:56.141793 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:56.132794    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.133650    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.135300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.135878    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.137492    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:56.141818 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:56.141830 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:56.166981 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:56.167013 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:56.193749 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:56.193777 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:56.248762 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:56.248796 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
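	Every "describe nodes" attempt fails the same way: nothing answers on localhost:8443. A quick manual check of that endpoint (curl and ss are assumptions here, not shown in the log; only the URL and port come from the errors above):

	curl -sk https://localhost:8443/healthz && echo ok || echo "apiserver not answering on 8443"   # hypothetical check
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on :8443"                              # hypothetical check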
	I1218 01:50:58.763667 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:58.773893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:58.773964 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:58.801142 1550381 cri.go:89] found id: ""
	I1218 01:50:58.801168 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.801177 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:58.801184 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:58.801255 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:58.826909 1550381 cri.go:89] found id: ""
	I1218 01:50:58.826937 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.826946 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:58.826952 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:58.827011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:58.852298 1550381 cri.go:89] found id: ""
	I1218 01:50:58.852328 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.852337 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:58.852343 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:58.852402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:58.877078 1550381 cri.go:89] found id: ""
	I1218 01:50:58.877103 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.877112 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:58.877118 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:58.877179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:58.908546 1550381 cri.go:89] found id: ""
	I1218 01:50:58.908572 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.908582 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:58.908588 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:58.908665 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:58.963294 1550381 cri.go:89] found id: ""
	I1218 01:50:58.963327 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.963336 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:58.963342 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:58.963408 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:59.004870 1550381 cri.go:89] found id: ""
	I1218 01:50:59.004907 1550381 logs.go:282] 0 containers: []
	W1218 01:50:59.004917 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:59.004923 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:59.004995 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:59.030744 1550381 cri.go:89] found id: ""
	I1218 01:50:59.030812 1550381 logs.go:282] 0 containers: []
	W1218 01:50:59.030838 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:59.030854 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:59.030866 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:59.045546 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:59.045575 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:59.112855 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:59.104235    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.104777    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.106469    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.106981    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.108512    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:59.112876 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:59.112888 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:59.137778 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:59.137857 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:59.165599 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:59.165624 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:01.723994 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:01.734966 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:01.735033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:01.759065 1550381 cri.go:89] found id: ""
	I1218 01:51:01.759093 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.759102 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:01.759108 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:01.759169 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:01.787378 1550381 cri.go:89] found id: ""
	I1218 01:51:01.787406 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.787416 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:01.787421 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:01.787490 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:01.812815 1550381 cri.go:89] found id: ""
	I1218 01:51:01.812838 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.812847 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:01.812853 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:01.812912 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:01.838955 1550381 cri.go:89] found id: ""
	I1218 01:51:01.838981 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.838990 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:01.839003 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:01.839062 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:01.864230 1550381 cri.go:89] found id: ""
	I1218 01:51:01.864256 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.864266 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:01.864273 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:01.864335 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:01.890158 1550381 cri.go:89] found id: ""
	I1218 01:51:01.890184 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.890193 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:01.890199 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:01.890259 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:01.955214 1550381 cri.go:89] found id: ""
	I1218 01:51:01.955289 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.955313 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:01.955332 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:01.955421 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:01.997347 1550381 cri.go:89] found id: ""
	I1218 01:51:01.997414 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.997439 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:01.997457 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:01.997469 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:02.054965 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:02.055055 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:02.074503 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:02.074555 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:02.144467 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:02.135994    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.136861    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.138510    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.138865    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.140404    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:02.144499 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:02.144513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:02.170450 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:02.170493 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
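	The "container status" command above is deliberately defensive: it resolves crictl via which, falls back to the bare name if which finds nothing, and if the whole crictl invocation still fails it tries docker instead. The same fallback written out on its own (a sketch restating the logged command, not a new behavior):

	# prefer crictl (resolved via which); fall back to docker ps if crictl fails
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a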
	I1218 01:51:04.704549 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:04.715641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:04.715714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:04.742904 1550381 cri.go:89] found id: ""
	I1218 01:51:04.742928 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.742937 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:04.742943 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:04.743002 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:04.768296 1550381 cri.go:89] found id: ""
	I1218 01:51:04.768323 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.768332 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:04.768338 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:04.768400 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:04.794825 1550381 cri.go:89] found id: ""
	I1218 01:51:04.794859 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.794868 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:04.794888 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:04.794953 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:04.820347 1550381 cri.go:89] found id: ""
	I1218 01:51:04.820375 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.820383 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:04.820390 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:04.820452 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:04.845796 1550381 cri.go:89] found id: ""
	I1218 01:51:04.845823 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.845832 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:04.845839 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:04.845899 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:04.870392 1550381 cri.go:89] found id: ""
	I1218 01:51:04.870418 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.870426 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:04.870433 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:04.870495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:04.918945 1550381 cri.go:89] found id: ""
	I1218 01:51:04.918979 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.918988 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:04.918995 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:04.919055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:04.974228 1550381 cri.go:89] found id: ""
	I1218 01:51:04.974255 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.974264 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:04.974273 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:04.974286 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:05.042680 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:05.033763    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.034389    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.036284    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.036826    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.038546    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:05.042706 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:05.042719 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:05.068392 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:05.068427 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:05.097162 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:05.097199 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:05.155869 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:05.155910 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:07.671922 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:07.682619 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:07.682688 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:07.707484 1550381 cri.go:89] found id: ""
	I1218 01:51:07.707512 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.707521 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:07.707528 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:07.707585 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:07.736732 1550381 cri.go:89] found id: ""
	I1218 01:51:07.736765 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.736774 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:07.736781 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:07.736841 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:07.761774 1550381 cri.go:89] found id: ""
	I1218 01:51:07.761800 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.761809 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:07.761815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:07.761876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:07.790605 1550381 cri.go:89] found id: ""
	I1218 01:51:07.790635 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.790644 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:07.790650 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:07.790714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:07.816203 1550381 cri.go:89] found id: ""
	I1218 01:51:07.816230 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.816239 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:07.816245 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:07.816304 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:07.841127 1550381 cri.go:89] found id: ""
	I1218 01:51:07.841150 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.841159 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:07.841165 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:07.841225 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:07.865946 1550381 cri.go:89] found id: ""
	I1218 01:51:07.866010 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.866036 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:07.866053 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:07.866143 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:07.916531 1550381 cri.go:89] found id: ""
	I1218 01:51:07.916559 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.916568 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:07.916578 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:07.916589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:07.983404 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:07.983433 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:08.038790 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:08.038829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:08.055026 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:08.055100 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:08.121982 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:08.112879    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.113469    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.115072    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.115668    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.117746    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:08.122053 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:08.122079 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:10.648476 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:10.659206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:10.659275 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:10.684487 1550381 cri.go:89] found id: ""
	I1218 01:51:10.684516 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.684525 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:10.684532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:10.684594 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:10.709248 1550381 cri.go:89] found id: ""
	I1218 01:51:10.709278 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.709288 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:10.709294 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:10.709354 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:10.733670 1550381 cri.go:89] found id: ""
	I1218 01:51:10.733700 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.733709 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:10.733716 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:10.733776 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:10.762711 1550381 cri.go:89] found id: ""
	I1218 01:51:10.762734 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.762748 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:10.762755 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:10.762814 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:10.791896 1550381 cri.go:89] found id: ""
	I1218 01:51:10.791929 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.791938 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:10.791944 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:10.792012 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:10.816916 1550381 cri.go:89] found id: ""
	I1218 01:51:10.816940 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.816951 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:10.816957 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:10.817018 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:10.848467 1550381 cri.go:89] found id: ""
	I1218 01:51:10.848533 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.848555 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:10.848575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:10.848684 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:10.872632 1550381 cri.go:89] found id: ""
	I1218 01:51:10.872694 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.872710 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:10.872719 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:10.872731 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:10.932049 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:10.932119 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:11.006112 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:11.006150 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:11.021573 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:11.021602 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:11.086764 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:11.077377    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.078427    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.080067    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.080416    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.082029    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:11.086785 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:11.086798 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:13.613916 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:13.625018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:13.625093 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:13.651186 1550381 cri.go:89] found id: ""
	I1218 01:51:13.651211 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.651220 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:13.651226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:13.651289 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:13.680145 1550381 cri.go:89] found id: ""
	I1218 01:51:13.680172 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.680181 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:13.680187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:13.680246 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:13.706941 1550381 cri.go:89] found id: ""
	I1218 01:51:13.706970 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.706980 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:13.706986 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:13.707046 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:13.735536 1550381 cri.go:89] found id: ""
	I1218 01:51:13.735562 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.735571 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:13.735578 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:13.735637 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:13.763111 1550381 cri.go:89] found id: ""
	I1218 01:51:13.763185 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.763209 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:13.763227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:13.763313 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:13.788754 1550381 cri.go:89] found id: ""
	I1218 01:51:13.788779 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.788787 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:13.788794 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:13.788883 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:13.813966 1550381 cri.go:89] found id: ""
	I1218 01:51:13.813989 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.814004 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:13.814010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:13.814068 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:13.838881 1550381 cri.go:89] found id: ""
	I1218 01:51:13.838907 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.838915 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:13.838925 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:13.838936 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:13.869225 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:13.869250 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:13.928878 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:13.928917 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:13.955609 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:13.955639 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:14.045680 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:14.037393    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.038154    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.039915    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.040305    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.041849    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:14.045710 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:14.045723 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:16.572096 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:16.582596 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:16.582666 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:16.606933 1550381 cri.go:89] found id: ""
	I1218 01:51:16.606963 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.606972 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:16.606979 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:16.607038 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:16.631960 1550381 cri.go:89] found id: ""
	I1218 01:51:16.631989 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.632004 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:16.632010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:16.632071 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:16.659171 1550381 cri.go:89] found id: ""
	I1218 01:51:16.659198 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.659207 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:16.659213 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:16.659269 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:16.689389 1550381 cri.go:89] found id: ""
	I1218 01:51:16.689414 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.689422 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:16.689429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:16.689494 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:16.714209 1550381 cri.go:89] found id: ""
	I1218 01:51:16.714236 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.714246 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:16.714252 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:16.714311 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:16.739422 1550381 cri.go:89] found id: ""
	I1218 01:51:16.739450 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.739461 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:16.739467 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:16.739529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:16.765164 1550381 cri.go:89] found id: ""
	I1218 01:51:16.765231 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.765256 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:16.765283 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:16.765372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:16.790914 1550381 cri.go:89] found id: ""
	I1218 01:51:16.790990 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.791014 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:16.791035 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:16.791063 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:16.848408 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:16.848446 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:16.864121 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:16.864199 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:16.967366 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:16.949726    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.950835    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.952284    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.953468    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.954281    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:16.949726    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.950835    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.952284    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.953468    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.954281    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:16.967436 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:16.967463 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:17.008108 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:17.008145 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
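
Each cycle in this failure window has the same shape: the collector pgreps for a running kube-apiserver, asks crictl for every expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then falls back to dumping the kubelet journal, dmesg, kubectl describe nodes, the containerd journal, and crictl ps -a. A minimal shell sketch of the same per-component check, runnable by hand (the profile name functional-232602 is taken from this report; crictl runs inside the node, so the commands go through minikube ssh):

    # Minimal sketch, assuming a minikube profile named "functional-232602".
    # Mirrors the per-component crictl query the harness issues via ssh_runner.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      minikube -p functional-232602 ssh -- sudo crictl ps -a --quiet --name="$c"
    done
    # Empty output for every component means the control plane never started;
    # the kubelet journal is the next stop, exactly as the collector's fallback does:
    minikube -p functional-232602 ssh -- sudo journalctl -u kubelet -n 400 --no-pager
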
	I1218 01:51:19.540127 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:19.550917 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:19.550989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:19.574864 1550381 cri.go:89] found id: ""
	I1218 01:51:19.574939 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.574964 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:19.574978 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:19.575059 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:19.605362 1550381 cri.go:89] found id: ""
	I1218 01:51:19.605386 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.605395 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:19.605401 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:19.605465 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:19.631747 1550381 cri.go:89] found id: ""
	I1218 01:51:19.631774 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.631789 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:19.631795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:19.631870 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:19.656716 1550381 cri.go:89] found id: ""
	I1218 01:51:19.656740 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.656749 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:19.656755 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:19.656813 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:19.689179 1550381 cri.go:89] found id: ""
	I1218 01:51:19.689206 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.689215 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:19.689221 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:19.689292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:19.713751 1550381 cri.go:89] found id: ""
	I1218 01:51:19.713774 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.713783 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:19.713789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:19.713846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:19.737993 1550381 cri.go:89] found id: ""
	I1218 01:51:19.738063 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.738074 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:19.738081 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:19.738150 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:19.763540 1550381 cri.go:89] found id: ""
	I1218 01:51:19.763565 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.763574 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:19.763583 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:19.763618 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:19.818946 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:19.818982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:19.834461 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:19.834487 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:19.932671 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:19.901289    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902181    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902894    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.904924    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.905669    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:19.901289    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902181    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902894    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.904924    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.905669    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:19.932695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:19.932708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:19.986050 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:19.986085 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:22.530737 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:22.542075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:22.542151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:22.567921 1550381 cri.go:89] found id: ""
	I1218 01:51:22.567945 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.567953 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:22.567960 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:22.568020 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:22.595894 1550381 cri.go:89] found id: ""
	I1218 01:51:22.595919 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.595928 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:22.595933 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:22.595991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:22.620929 1550381 cri.go:89] found id: ""
	I1218 01:51:22.620953 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.620968 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:22.620974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:22.621040 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:22.646170 1550381 cri.go:89] found id: ""
	I1218 01:51:22.646195 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.646203 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:22.646210 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:22.646270 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:22.675272 1550381 cri.go:89] found id: ""
	I1218 01:51:22.675296 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.675305 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:22.675312 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:22.675376 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:22.702994 1550381 cri.go:89] found id: ""
	I1218 01:51:22.703023 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.703033 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:22.703039 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:22.703106 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:22.728507 1550381 cri.go:89] found id: ""
	I1218 01:51:22.728533 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.728542 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:22.728548 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:22.728608 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:22.754134 1550381 cri.go:89] found id: ""
	I1218 01:51:22.754157 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.754165 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:22.754175 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:22.754187 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:22.810488 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:22.810539 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:22.826174 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:22.826212 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:22.906393 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:22.881838    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.882696    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884418    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884947    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.886679    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:22.881838    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.882696    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884418    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884947    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.886679    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:22.906431 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:22.906448 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:22.948969 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:22.949025 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
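
The repeated "dial tcp [::1]:8443: connect: connection refused" from kubectl follows directly from the empty crictl listings: the node-local kubeconfig at /var/lib/minikube/kubeconfig points at https://localhost:8443, and with no kube-apiserver container running, nothing is listening on that port. A quick hedged check (ss is assumed to be available inside the node image):

    # Minimal sketch: verify that nothing is listening on the apiserver port.
    minikube -p functional-232602 ssh -- sudo ss -tlnp 2>/dev/null | grep ':8443' \
      || echo "no listener on 8443 - consistent with the connection-refused errors"
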
	I1218 01:51:25.504885 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:25.515607 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:25.515676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:25.539969 1550381 cri.go:89] found id: ""
	I1218 01:51:25.539994 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.540003 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:25.540010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:25.540076 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:25.565160 1550381 cri.go:89] found id: ""
	I1218 01:51:25.565189 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.565198 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:25.565204 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:25.565262 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:25.593521 1550381 cri.go:89] found id: ""
	I1218 01:51:25.593545 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.593554 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:25.593560 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:25.593625 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:25.618492 1550381 cri.go:89] found id: ""
	I1218 01:51:25.618523 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.618532 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:25.618538 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:25.618600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:25.642784 1550381 cri.go:89] found id: ""
	I1218 01:51:25.642810 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.642819 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:25.642825 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:25.642885 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:25.667732 1550381 cri.go:89] found id: ""
	I1218 01:51:25.667759 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.667768 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:25.667778 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:25.667843 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:25.695444 1550381 cri.go:89] found id: ""
	I1218 01:51:25.695468 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.695477 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:25.695483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:25.695540 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:25.720467 1550381 cri.go:89] found id: ""
	I1218 01:51:25.720492 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.720501 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:25.720510 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:25.720522 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:25.777380 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:25.777416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:25.793106 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:25.793135 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:25.859796 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:25.850842    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.851589    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853179    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853705    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.855254    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:25.850842    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.851589    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853179    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853705    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.855254    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:25.859817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:25.859829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:25.885375 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:25.885414 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:28.480490 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:28.491517 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:28.491587 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:28.528988 1550381 cri.go:89] found id: ""
	I1218 01:51:28.529011 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.529020 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:28.529027 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:28.529088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:28.554389 1550381 cri.go:89] found id: ""
	I1218 01:51:28.554415 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.554423 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:28.554429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:28.554491 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:28.595339 1550381 cri.go:89] found id: ""
	I1218 01:51:28.595365 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.595374 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:28.595380 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:28.595440 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:28.620349 1550381 cri.go:89] found id: ""
	I1218 01:51:28.620376 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.620384 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:28.620391 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:28.620451 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:28.644815 1550381 cri.go:89] found id: ""
	I1218 01:51:28.644844 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.644854 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:28.644862 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:28.644923 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:28.669719 1550381 cri.go:89] found id: ""
	I1218 01:51:28.669746 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.669755 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:28.669762 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:28.669822 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:28.694390 1550381 cri.go:89] found id: ""
	I1218 01:51:28.694415 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.694424 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:28.694430 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:28.694491 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:28.719213 1550381 cri.go:89] found id: ""
	I1218 01:51:28.719238 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.719247 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:28.719257 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:28.719268 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:28.777972 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:28.778010 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:28.792667 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:28.792698 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:28.863732 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:28.855982    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.856419    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.857906    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.858373    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.859886    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:28.855982    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.856419    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.857906    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.858373    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.859886    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:28.863755 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:28.863768 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:28.896538 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:28.896571 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:31.484234 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:31.494710 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:31.494781 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:31.519036 1550381 cri.go:89] found id: ""
	I1218 01:51:31.519061 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.519070 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:31.519077 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:31.519136 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:31.543677 1550381 cri.go:89] found id: ""
	I1218 01:51:31.543702 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.543710 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:31.543717 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:31.543778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:31.570267 1550381 cri.go:89] found id: ""
	I1218 01:51:31.570299 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.570308 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:31.570315 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:31.570406 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:31.597988 1550381 cri.go:89] found id: ""
	I1218 01:51:31.598024 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.598034 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:31.598040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:31.598102 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:31.625949 1550381 cri.go:89] found id: ""
	I1218 01:51:31.625983 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.625993 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:31.626014 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:31.626097 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:31.654833 1550381 cri.go:89] found id: ""
	I1218 01:51:31.654898 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.654923 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:31.654937 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:31.655011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:31.686105 1550381 cri.go:89] found id: ""
	I1218 01:51:31.686132 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.686143 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:31.686149 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:31.686233 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:31.711106 1550381 cri.go:89] found id: ""
	I1218 01:51:31.711139 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.711148 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:31.711158 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:31.711187 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:31.725923 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:31.725952 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:31.789766 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:31.780492    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.781242    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.782882    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.783497    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.785156    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:31.780492    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.781242    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.782882    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.783497    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.785156    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:31.789789 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:31.789801 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:31.815524 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:31.815558 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:31.843690 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:31.843718 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:34.403611 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:34.414490 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:34.414564 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:34.438520 1550381 cri.go:89] found id: ""
	I1218 01:51:34.438544 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.438552 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:34.438562 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:34.438625 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:34.462603 1550381 cri.go:89] found id: ""
	I1218 01:51:34.462627 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.462636 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:34.462642 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:34.462699 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:34.490371 1550381 cri.go:89] found id: ""
	I1218 01:51:34.490395 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.490404 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:34.490410 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:34.490471 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:34.513456 1550381 cri.go:89] found id: ""
	I1218 01:51:34.513480 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.513488 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:34.513495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:34.513562 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:34.537361 1550381 cri.go:89] found id: ""
	I1218 01:51:34.537385 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.537394 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:34.537407 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:34.537468 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:34.561230 1550381 cri.go:89] found id: ""
	I1218 01:51:34.561253 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.561261 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:34.561268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:34.561348 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:34.585180 1550381 cri.go:89] found id: ""
	I1218 01:51:34.585204 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.585212 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:34.585219 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:34.585280 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:34.609741 1550381 cri.go:89] found id: ""
	I1218 01:51:34.609766 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.609775 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:34.609785 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:34.609802 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:34.667204 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:34.667238 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:34.682240 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:34.682269 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:34.745795 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:34.737582    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.737983    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739504    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739826    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.741316    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:34.737582    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.737983    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739504    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739826    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.741316    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:34.745817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:34.745831 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:34.771222 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:34.771256 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:37.302139 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:37.313213 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:37.313316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:37.348873 1550381 cri.go:89] found id: ""
	I1218 01:51:37.348895 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.348903 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:37.348909 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:37.348966 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:37.374229 1550381 cri.go:89] found id: ""
	I1218 01:51:37.374256 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.374265 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:37.374271 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:37.374332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:37.398897 1550381 cri.go:89] found id: ""
	I1218 01:51:37.398920 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.398928 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:37.398935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:37.398991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:37.422904 1550381 cri.go:89] found id: ""
	I1218 01:51:37.422930 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.422939 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:37.422946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:37.423010 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:37.451168 1550381 cri.go:89] found id: ""
	I1218 01:51:37.451196 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.451205 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:37.451211 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:37.451273 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:37.477986 1550381 cri.go:89] found id: ""
	I1218 01:51:37.478011 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.478021 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:37.478028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:37.478096 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:37.504463 1550381 cri.go:89] found id: ""
	I1218 01:51:37.504487 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.504497 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:37.504503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:37.504563 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:37.529381 1550381 cri.go:89] found id: ""
	I1218 01:51:37.529405 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.529414 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:37.529423 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:37.529435 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:37.598285 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:37.584791    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.590506    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.591762    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.592298    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.593854    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:37.584791    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.590506    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.591762    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.592298    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.593854    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:37.598307 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:37.598319 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:37.623017 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:37.623052 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:37.654645 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:37.654674 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:37.711304 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:37.711339 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
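Each retry cycle runs the same scan: pgrep for the apiserver, one crictl query per component name, then log collection (kubelet, dmesg, describe nodes, containerd, container status, in varying order). A compact sketch of the component scan for manual triage, run inside the node (e.g. via minikube ssh); it mirrors the logged crictl invocations rather than minikube's internal cri.go helper:

    # empty output for a name corresponds to the 'No container was found matching' warnings
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%-24s %s\n' "$name" "$(sudo crictl ps -a --quiet --name="$name")"
    done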
	I1218 01:51:40.226741 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:40.238408 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:40.238480 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:40.263769 1550381 cri.go:89] found id: ""
	I1218 01:51:40.263795 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.263804 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:40.263810 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:40.263896 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:40.289194 1550381 cri.go:89] found id: ""
	I1218 01:51:40.289220 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.289228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:40.289234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:40.289292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:40.314040 1550381 cri.go:89] found id: ""
	I1218 01:51:40.314064 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.314073 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:40.314079 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:40.314137 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:40.339145 1550381 cri.go:89] found id: ""
	I1218 01:51:40.339180 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.339189 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:40.339212 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:40.339293 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:40.364902 1550381 cri.go:89] found id: ""
	I1218 01:51:40.364931 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.364940 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:40.364947 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:40.365009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:40.389709 1550381 cri.go:89] found id: ""
	I1218 01:51:40.389730 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.389739 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:40.389745 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:40.389804 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:40.414858 1550381 cri.go:89] found id: ""
	I1218 01:51:40.414882 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.414891 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:40.414898 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:40.414958 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:40.441847 1550381 cri.go:89] found id: ""
	I1218 01:51:40.441875 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.441884 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:40.441893 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:40.441906 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:40.456791 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:40.456821 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:40.525853 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:40.518222    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.518768    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.520336    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.520859    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.521950    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:40.525876 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:40.525889 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:40.550993 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:40.551028 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:40.581756 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:40.581786 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
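The log-gathering half of each cycle is four shell commands; they can be replayed by hand inside the node to capture state once instead of per retry. The redirections below are an assumed convenience, and the container-status line is slightly simplified from the logged "which crictl" fallback:

    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.txt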
	I1218 01:51:43.139640 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:43.166426 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:43.166501 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:43.205967 1550381 cri.go:89] found id: ""
	I1218 01:51:43.206046 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.206071 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:43.206091 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:43.206223 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:43.234922 1550381 cri.go:89] found id: ""
	I1218 01:51:43.234950 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.234958 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:43.234964 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:43.235023 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:43.261353 1550381 cri.go:89] found id: ""
	I1218 01:51:43.261376 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.261385 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:43.261392 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:43.261482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:43.286879 1550381 cri.go:89] found id: ""
	I1218 01:51:43.286906 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.286915 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:43.286922 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:43.286982 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:43.312530 1550381 cri.go:89] found id: ""
	I1218 01:51:43.312554 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.312568 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:43.312575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:43.312667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:43.337185 1550381 cri.go:89] found id: ""
	I1218 01:51:43.337207 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.337217 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:43.337223 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:43.337280 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:43.361707 1550381 cri.go:89] found id: ""
	I1218 01:51:43.361731 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.361741 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:43.361747 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:43.361805 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:43.391450 1550381 cri.go:89] found id: ""
	I1218 01:51:43.391483 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.391492 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:43.391502 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:43.391513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:43.449067 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:43.449104 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:43.464299 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:43.464329 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:43.534945 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:43.525741    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.526498    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.528182    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.528863    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.530697    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:43.534968 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:43.534980 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:43.560324 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:43.560357 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:46.089618 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:46.100369 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:46.100466 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:46.125679 1550381 cri.go:89] found id: ""
	I1218 01:51:46.125705 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.125714 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:46.125722 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:46.125789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:46.187262 1550381 cri.go:89] found id: ""
	I1218 01:51:46.187300 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.187310 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:46.187317 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:46.187376 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:46.244106 1550381 cri.go:89] found id: ""
	I1218 01:51:46.244130 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.244139 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:46.244145 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:46.244212 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:46.269674 1550381 cri.go:89] found id: ""
	I1218 01:51:46.269740 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.269769 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:46.269787 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:46.269876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:46.299177 1550381 cri.go:89] found id: ""
	I1218 01:51:46.299199 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.299209 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:46.299215 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:46.299273 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:46.328469 1550381 cri.go:89] found id: ""
	I1218 01:51:46.328491 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.328499 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:46.328506 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:46.328564 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:46.354262 1550381 cri.go:89] found id: ""
	I1218 01:51:46.354288 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.354297 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:46.354304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:46.354362 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:46.378724 1550381 cri.go:89] found id: ""
	I1218 01:51:46.378752 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.378761 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:46.378770 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:46.378781 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:46.433721 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:46.433759 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:46.448259 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:46.448295 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:46.511060 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:46.503056    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.503703    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.504880    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.505441    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.507108    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:46.511081 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:46.511093 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:46.536601 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:46.536803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:49.070137 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:49.081049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:49.081123 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:49.106438 1550381 cri.go:89] found id: ""
	I1218 01:51:49.106465 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.106474 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:49.106483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:49.106546 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:49.131233 1550381 cri.go:89] found id: ""
	I1218 01:51:49.131257 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.131265 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:49.131272 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:49.131337 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:49.194204 1550381 cri.go:89] found id: ""
	I1218 01:51:49.194233 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.194242 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:49.194248 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:49.194310 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:49.244013 1550381 cri.go:89] found id: ""
	I1218 01:51:49.244039 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.244048 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:49.244054 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:49.244120 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:49.271185 1550381 cri.go:89] found id: ""
	I1218 01:51:49.271211 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.271219 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:49.271226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:49.271288 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:49.298143 1550381 cri.go:89] found id: ""
	I1218 01:51:49.298170 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.298180 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:49.298187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:49.298251 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:49.324346 1550381 cri.go:89] found id: ""
	I1218 01:51:49.324374 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.324383 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:49.324389 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:49.324450 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:49.350033 1550381 cri.go:89] found id: ""
	I1218 01:51:49.350063 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.350072 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:49.350081 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:49.350094 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:49.382558 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:49.382589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:49.438756 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:49.438795 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:49.453736 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:49.453765 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:49.515649 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:49.506698    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.507341    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.508268    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.509832    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.510129    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:49.515672 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:49.515684 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:52.041321 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:52.052329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:52.052403 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:52.082403 1550381 cri.go:89] found id: ""
	I1218 01:51:52.082434 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.082444 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:52.082451 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:52.082513 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:52.108691 1550381 cri.go:89] found id: ""
	I1218 01:51:52.108720 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.108729 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:52.108735 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:52.108795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:52.138279 1550381 cri.go:89] found id: ""
	I1218 01:51:52.138314 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.138323 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:52.138329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:52.138393 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:52.207039 1550381 cri.go:89] found id: ""
	I1218 01:51:52.207067 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.207076 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:52.207083 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:52.207150 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:52.236007 1550381 cri.go:89] found id: ""
	I1218 01:51:52.236042 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.236052 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:52.236059 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:52.236125 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:52.267547 1550381 cri.go:89] found id: ""
	I1218 01:51:52.267583 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.267593 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:52.267599 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:52.267668 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:52.295275 1550381 cri.go:89] found id: ""
	I1218 01:51:52.295310 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.295320 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:52.295326 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:52.295407 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:52.324187 1550381 cri.go:89] found id: ""
	I1218 01:51:52.324215 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.324224 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:52.324234 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:52.324246 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:52.352151 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:52.352182 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:52.408412 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:52.408446 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:52.423024 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:52.423098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:52.488577 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:52.479672    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.480321    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.481877    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.482453    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.484212    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:52.488599 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:52.488613 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:55.015396 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:55.026777 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:55.026851 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:55.052687 1550381 cri.go:89] found id: ""
	I1218 01:51:55.052713 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.052722 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:55.052728 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:55.052786 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:55.082492 1550381 cri.go:89] found id: ""
	I1218 01:51:55.082515 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.082524 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:55.082531 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:55.082592 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:55.107565 1550381 cri.go:89] found id: ""
	I1218 01:51:55.107592 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.107600 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:55.107607 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:55.107674 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:55.135213 1550381 cri.go:89] found id: ""
	I1218 01:51:55.135241 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.135249 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:55.135270 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:55.135332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:55.177099 1550381 cri.go:89] found id: ""
	I1218 01:51:55.177128 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.177137 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:55.177143 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:55.177210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:55.224917 1550381 cri.go:89] found id: ""
	I1218 01:51:55.224946 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.224954 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:55.224961 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:55.225020 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:55.252438 1550381 cri.go:89] found id: ""
	I1218 01:51:55.252465 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.252473 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:55.252479 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:55.252538 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:55.277054 1550381 cri.go:89] found id: ""
	I1218 01:51:55.277074 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.277082 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:55.277091 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:55.277106 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:55.292214 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:55.292240 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:55.354379 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:55.346236    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.346747    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.348217    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.348649    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.350094    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:55.354401 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:55.354412 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:55.379112 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:55.379143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:55.407257 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:55.407284 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:57.964281 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:57.975020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:57.975088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:58.005630 1550381 cri.go:89] found id: ""
	I1218 01:51:58.005658 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.005667 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:58.005674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:58.005745 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:58.032296 1550381 cri.go:89] found id: ""
	I1218 01:51:58.032319 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.032329 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:58.032335 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:58.032402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:58.061454 1550381 cri.go:89] found id: ""
	I1218 01:51:58.061479 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.061488 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:58.061495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:58.061554 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:58.087783 1550381 cri.go:89] found id: ""
	I1218 01:51:58.087808 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.087817 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:58.087824 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:58.087884 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:58.115473 1550381 cri.go:89] found id: ""
	I1218 01:51:58.115496 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.115505 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:58.115512 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:58.115599 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:58.152731 1550381 cri.go:89] found id: ""
	I1218 01:51:58.152757 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.152766 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:58.152773 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:58.152832 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:58.207262 1550381 cri.go:89] found id: ""
	I1218 01:51:58.207284 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.207302 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:58.207310 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:58.207367 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:58.244074 1550381 cri.go:89] found id: ""
	I1218 01:51:58.244103 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.244112 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:58.244121 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:58.244133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:58.305417 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:58.305455 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:58.320298 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:58.320326 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:58.392177 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:58.383564    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.384410    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.386085    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.386657    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.388186    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:58.392200 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:58.392215 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:58.418264 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:58.418299 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
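The timestamps show this whole scan repeating on a roughly three-second cadence (01:51:37, :40, :43, ... through 01:52:01) without ever finding a control-plane container. A simple way to watch for the apiserver coming back during triage, sketched as an assumption for manual use rather than anything minikube itself runs:

    # inside the node; prints "up" once the apiserver process appears
    watch -n 3 "sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo up || echo down"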
	I1218 01:52:00.947037 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:00.958414 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:00.958504 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:00.982432 1550381 cri.go:89] found id: ""
	I1218 01:52:00.982456 1550381 logs.go:282] 0 containers: []
	W1218 01:52:00.982465 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:00.982472 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:00.982554 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:01.011620 1550381 cri.go:89] found id: ""
	I1218 01:52:01.011645 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.011654 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:01.011661 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:01.011721 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:01.038538 1550381 cri.go:89] found id: ""
	I1218 01:52:01.038564 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.038572 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:01.038578 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:01.038636 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:01.062732 1550381 cri.go:89] found id: ""
	I1218 01:52:01.062758 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.062768 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:01.062775 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:01.062836 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:01.088130 1550381 cri.go:89] found id: ""
	I1218 01:52:01.088156 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.088165 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:01.088172 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:01.088241 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:01.116412 1550381 cri.go:89] found id: ""
	I1218 01:52:01.116440 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.116450 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:01.116471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:01.116532 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:01.157710 1550381 cri.go:89] found id: ""
	I1218 01:52:01.157737 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.157747 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:01.157754 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:01.157815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:01.207757 1550381 cri.go:89] found id: ""
	I1218 01:52:01.207784 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.207794 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:01.207803 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:01.207815 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:01.293467 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:01.293515 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:01.308790 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:01.308825 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:01.377467 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:01.369257    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370071    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370750    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.372362    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.373023    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:01.369257    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370071    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370750    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.372362    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.373023    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:01.377487 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:01.377501 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:01.403688 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:01.403722 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
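The cycle above — `pgrep` for a kube-apiserver process, then `crictl ps -a --quiet --name=<component>` for each control-plane component — is minikube repeatedly checking whether the control plane has come up; every pass returns an empty ID list. Below is a minimal, self-contained Go sketch of this kind of readiness poll (an illustration only, not minikube's actual implementation; `containerIDs` and the 3-second cadence are assumptions inferred from the timestamps above, and it assumes `sudo` and `crictl` are available on the node):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// components mirrors the names polled in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// containerIDs (hypothetical helper) asks crictl for container IDs matching
// a name filter; an empty result corresponds to the `found id: ""` lines above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for {
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
		// Roughly the cadence between poll cycles in this log.
		time.Sleep(3 * time.Second)
	}
}
```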
	I1218 01:52:03.936540 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:03.947485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:03.947559 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:03.972917 1550381 cri.go:89] found id: ""
	I1218 01:52:03.972939 1550381 logs.go:282] 0 containers: []
	W1218 01:52:03.972947 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:03.972953 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:03.973018 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:03.997960 1550381 cri.go:89] found id: ""
	I1218 01:52:03.997983 1550381 logs.go:282] 0 containers: []
	W1218 01:52:03.997992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:03.997998 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:03.998056 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:04.027683 1550381 cri.go:89] found id: ""
	I1218 01:52:04.027754 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.027780 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:04.027808 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:04.027916 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:04.054769 1550381 cri.go:89] found id: ""
	I1218 01:52:04.054833 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.054843 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:04.054849 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:04.054917 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:04.081260 1550381 cri.go:89] found id: ""
	I1218 01:52:04.081284 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.081293 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:04.081299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:04.081372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:04.106563 1550381 cri.go:89] found id: ""
	I1218 01:52:04.106590 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.106599 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:04.106606 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:04.106667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:04.131682 1550381 cri.go:89] found id: ""
	I1218 01:52:04.131708 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.131717 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:04.131724 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:04.131790 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:04.170215 1550381 cri.go:89] found id: ""
	I1218 01:52:04.170242 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.170251 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:04.170260 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:04.170273 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:04.211169 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:04.211207 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:04.263603 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:04.263636 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:04.319257 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:04.319294 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:04.334300 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:04.334329 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:04.399992 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:04.392155    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.392854    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394600    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394909    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.396367    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:04.392155    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.392854    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394600    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394909    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.396367    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
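Every `kubectl describe nodes` attempt above fails the same way: nothing is listening on localhost:8443, so the client is refused before it can even fetch the API group list. The symptom can be checked without kubectl at all; here is a short Go probe of the same TCP port (a sketch, assuming the apiserver is expected on localhost:8443 as in the errors above):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port the way the failing kubectl calls do.
	// An error here matches the `dial tcp [::1]:8443: connect:
	// connection refused` lines in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```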
	I1218 01:52:06.900248 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:06.910997 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:06.911067 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:06.935514 1550381 cri.go:89] found id: ""
	I1218 01:52:06.935539 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.935548 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:06.935554 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:06.935612 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:06.959911 1550381 cri.go:89] found id: ""
	I1218 01:52:06.959933 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.959942 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:06.959949 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:06.960006 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:06.989689 1550381 cri.go:89] found id: ""
	I1218 01:52:06.989710 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.989719 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:06.989725 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:06.989783 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:07.016553 1550381 cri.go:89] found id: ""
	I1218 01:52:07.016578 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.016587 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:07.016594 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:07.016676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:07.042084 1550381 cri.go:89] found id: ""
	I1218 01:52:07.042106 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.042115 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:07.042121 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:07.042179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:07.067075 1550381 cri.go:89] found id: ""
	I1218 01:52:07.067097 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.067107 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:07.067113 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:07.067176 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:07.096366 1550381 cri.go:89] found id: ""
	I1218 01:52:07.096388 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.096398 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:07.096405 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:07.096465 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:07.125403 1550381 cri.go:89] found id: ""
	I1218 01:52:07.125426 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.125434 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:07.125444 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:07.125456 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:07.146124 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:07.146152 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:07.254257 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:07.245617   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.246126   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.247718   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.248202   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.249840   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:07.245617   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.246126   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.247718   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.248202   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.249840   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:07.254280 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:07.254292 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:07.280552 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:07.280590 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:07.307796 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:07.307825 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
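Besides the container checks, each cycle shells out to `journalctl -u kubelet -n 400` and `journalctl -u containerd -n 400` to capture the most recent unit logs. A small Go sketch of that gathering step follows (`gatherUnit` is a hypothetical helper for illustration; it assumes systemd and non-interactive `sudo`, as on the minikube node here):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherUnit tails the last n journal lines for a systemd unit,
// mirroring the `journalctl -u <unit> -n 400` commands in the log.
func gatherUnit(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := gatherUnit(u, 400)
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", u, err)
			continue
		}
		fmt.Printf("== %s: captured %d bytes ==\n", u, len(logs))
	}
}
```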
	I1218 01:52:09.873637 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:09.884205 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:09.884275 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:09.909771 1550381 cri.go:89] found id: ""
	I1218 01:52:09.909796 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.909805 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:09.909812 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:09.909869 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:09.934051 1550381 cri.go:89] found id: ""
	I1218 01:52:09.934082 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.934092 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:09.934098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:09.934161 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:09.964504 1550381 cri.go:89] found id: ""
	I1218 01:52:09.964528 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.964550 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:09.964561 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:09.964662 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:09.990501 1550381 cri.go:89] found id: ""
	I1218 01:52:09.990525 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.990534 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:09.990543 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:09.990616 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:10.028312 1550381 cri.go:89] found id: ""
	I1218 01:52:10.028339 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.028348 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:10.028355 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:10.028419 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:10.054415 1550381 cri.go:89] found id: ""
	I1218 01:52:10.054443 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.054453 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:10.054460 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:10.054545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:10.085976 1550381 cri.go:89] found id: ""
	I1218 01:52:10.086003 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.086013 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:10.086020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:10.086081 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:10.112422 1550381 cri.go:89] found id: ""
	I1218 01:52:10.112455 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.112464 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:10.112473 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:10.112485 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:10.214552 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:10.196275   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.197311   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199133   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199838   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.203593   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:10.196275   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.197311   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199133   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199838   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.203593   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:10.214579 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:10.214591 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:10.245834 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:10.245872 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:10.278949 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:10.278983 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:10.338117 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:10.338153 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
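The dmesg step filters kernel messages to warning severity and above and keeps the last 400 lines. Because the command is a shell pipeline, reproducing it verbatim needs a shell; this sketch simply wraps the exact pipeline from the log in `bash -c` (it assumes bash, `sudo`, and a dmesg build that supports these flags, as on the node above):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same filter as the log's dmesg step: warnings and worse, last 400 lines.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(out))
}
```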
	I1218 01:52:12.853298 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:12.863919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:12.864003 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:12.888289 1550381 cri.go:89] found id: ""
	I1218 01:52:12.888315 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.888324 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:12.888330 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:12.888389 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:12.914281 1550381 cri.go:89] found id: ""
	I1218 01:52:12.914306 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.914315 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:12.914321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:12.914384 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:12.941058 1550381 cri.go:89] found id: ""
	I1218 01:52:12.941083 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.941092 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:12.941098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:12.941160 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:12.966998 1550381 cri.go:89] found id: ""
	I1218 01:52:12.967022 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.967030 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:12.967037 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:12.967095 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:12.996005 1550381 cri.go:89] found id: ""
	I1218 01:52:12.996027 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.996036 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:12.996042 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:12.996099 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:13.023321 1550381 cri.go:89] found id: ""
	I1218 01:52:13.023345 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.023354 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:13.023360 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:13.023429 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:13.049195 1550381 cri.go:89] found id: ""
	I1218 01:52:13.049220 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.049229 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:13.049235 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:13.049295 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:13.074787 1550381 cri.go:89] found id: ""
	I1218 01:52:13.074816 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.074825 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:13.074835 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:13.074874 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:13.131893 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:13.131926 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:13.159867 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:13.159942 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:13.281047 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:13.272958   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.273401   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.274898   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.275221   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.276713   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:13.272958   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.273401   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.274898   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.275221   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.276713   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:13.281070 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:13.281089 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:13.307183 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:13.307217 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
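The "container status" step is a shell fallback: it resolves `crictl` if installed and otherwise falls back to `docker ps -a`, so the report still gets a container listing on either runtime. The same fallback expressed directly in Go (a sketch; `containerStatus` is a hypothetical helper, and it assumes `sudo` works non-interactively):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the log's fallback: try crictl first,
// then fall back to docker if crictl is missing or fails.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Print(out)
}
```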
	I1218 01:52:15.837707 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:15.848404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:15.848478 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:15.873587 1550381 cri.go:89] found id: ""
	I1218 01:52:15.873615 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.873624 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:15.873630 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:15.873689 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:15.897757 1550381 cri.go:89] found id: ""
	I1218 01:52:15.897780 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.897788 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:15.897795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:15.897852 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:15.923098 1550381 cri.go:89] found id: ""
	I1218 01:52:15.923123 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.923132 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:15.923138 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:15.923231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:15.952891 1550381 cri.go:89] found id: ""
	I1218 01:52:15.952921 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.952929 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:15.952935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:15.952991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:15.979178 1550381 cri.go:89] found id: ""
	I1218 01:52:15.979204 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.979212 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:15.979218 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:15.979276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:16.007995 1550381 cri.go:89] found id: ""
	I1218 01:52:16.008022 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.008031 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:16.008038 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:16.008101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:16.032581 1550381 cri.go:89] found id: ""
	I1218 01:52:16.032607 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.032616 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:16.032641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:16.032709 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:16.058847 1550381 cri.go:89] found id: ""
	I1218 01:52:16.058872 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.058881 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:16.058891 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:16.058902 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:16.116382 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:16.116416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:16.131483 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:16.131513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:16.233031 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:16.217680   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.218790   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223136   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223474   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.228595   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:16.217680   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.218790   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223136   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223474   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.228595   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:16.233053 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:16.233066 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:16.262932 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:16.262966 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:18.790616 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:18.801658 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:18.801729 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:18.830076 1550381 cri.go:89] found id: ""
	I1218 01:52:18.830102 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.830112 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:18.830118 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:18.830179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:18.855278 1550381 cri.go:89] found id: ""
	I1218 01:52:18.855306 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.855315 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:18.855321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:18.855380 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:18.886976 1550381 cri.go:89] found id: ""
	I1218 01:52:18.886998 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.887012 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:18.887018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:18.887078 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:18.911656 1550381 cri.go:89] found id: ""
	I1218 01:52:18.911678 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.911686 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:18.911692 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:18.911750 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:18.935981 1550381 cri.go:89] found id: ""
	I1218 01:52:18.936002 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.936011 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:18.936017 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:18.936074 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:18.960773 1550381 cri.go:89] found id: ""
	I1218 01:52:18.960795 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.960804 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:18.960811 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:18.960871 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:18.985996 1550381 cri.go:89] found id: ""
	I1218 01:52:18.986023 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.986032 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:18.986039 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:18.986101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:19.011618 1550381 cri.go:89] found id: ""
	I1218 01:52:19.011696 1550381 logs.go:282] 0 containers: []
	W1218 01:52:19.011719 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:19.011740 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:19.011766 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:19.027064 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:19.027093 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:19.094483 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:19.086145   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.086880   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088561   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088988   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.090612   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:19.086145   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.086880   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088561   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088988   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.090612   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:19.094507 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:19.094519 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:19.120053 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:19.120087 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:19.190394 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:19.190426 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:21.774413 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:21.785229 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:21.785300 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:21.814294 1550381 cri.go:89] found id: ""
	I1218 01:52:21.814316 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.814325 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:21.814331 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:21.814394 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:21.840168 1550381 cri.go:89] found id: ""
	I1218 01:52:21.840191 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.840200 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:21.840207 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:21.840267 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:21.865098 1550381 cri.go:89] found id: ""
	I1218 01:52:21.865120 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.865129 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:21.865134 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:21.865198 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:21.890513 1550381 cri.go:89] found id: ""
	I1218 01:52:21.890535 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.890543 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:21.890550 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:21.890607 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:21.915362 1550381 cri.go:89] found id: ""
	I1218 01:52:21.915384 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.915393 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:21.915399 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:21.915457 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:21.941078 1550381 cri.go:89] found id: ""
	I1218 01:52:21.941101 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.941110 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:21.941117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:21.941182 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:21.965276 1550381 cri.go:89] found id: ""
	I1218 01:52:21.965302 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.965311 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:21.965318 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:21.965375 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:21.990348 1550381 cri.go:89] found id: ""
	I1218 01:52:21.990370 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.990378 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:21.990387 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:21.990398 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:22.046097 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:22.046132 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:22.061468 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:22.061498 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:22.129867 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:22.121557   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.122235   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.123742   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.124182   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.125683   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:22.121557   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.122235   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.123742   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.124182   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.125683   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:22.129889 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:22.129901 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:22.160943 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:22.160982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:24.703063 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:24.713938 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:24.714009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:24.739085 1550381 cri.go:89] found id: ""
	I1218 01:52:24.739167 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.739189 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:24.739209 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:24.739298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:24.763316 1550381 cri.go:89] found id: ""
	I1218 01:52:24.763359 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.763368 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:24.763374 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:24.763443 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:24.789401 1550381 cri.go:89] found id: ""
	I1218 01:52:24.789431 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.789441 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:24.789471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:24.789558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:24.819426 1550381 cri.go:89] found id: ""
	I1218 01:52:24.819458 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.819468 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:24.819474 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:24.819547 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:24.844106 1550381 cri.go:89] found id: ""
	I1218 01:52:24.844143 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.844152 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:24.844159 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:24.844230 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:24.868116 1550381 cri.go:89] found id: ""
	I1218 01:52:24.868140 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.868149 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:24.868156 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:24.868213 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:24.892247 1550381 cri.go:89] found id: ""
	I1218 01:52:24.892280 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.892289 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:24.892311 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:24.892390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:24.917988 1550381 cri.go:89] found id: ""
	I1218 01:52:24.918013 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.918022 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:24.918031 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:24.918060 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:24.972539 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:24.972571 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:24.987364 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:24.987391 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:25.066535 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:25.057583   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.058416   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.060357   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.061023   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.062670   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:25.057583   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.058416   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.060357   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.061023   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.062670   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:25.066557 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:25.066572 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:25.093529 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:25.093573 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
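The per-component queries above can be replayed by hand. A minimal sketch in bash, assuming crictl on the node is already configured for the containerd socket; the component list simply mirrors the names this log queries:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  if [ -n "$ids" ]; then echo "$c: $ids"; else echo "$c: no containers"; fi
	done

The "container status" line also shows the fallback built into the logged command: `which crictl || echo crictl` resolves crictl to an absolute path when possible, and the trailing `|| sudo docker ps -a` covers setups where crictl is unavailable.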
	I1218 01:52:27.627215 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:27.637795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:27.637864 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:27.661825 1550381 cri.go:89] found id: ""
	I1218 01:52:27.661850 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.661859 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:27.661866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:27.661931 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:27.688769 1550381 cri.go:89] found id: ""
	I1218 01:52:27.688795 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.688803 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:27.688810 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:27.688895 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:27.714909 1550381 cri.go:89] found id: ""
	I1218 01:52:27.714992 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.715009 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:27.715017 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:27.715080 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:27.742595 1550381 cri.go:89] found id: ""
	I1218 01:52:27.742620 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.742628 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:27.742636 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:27.742695 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:27.768328 1550381 cri.go:89] found id: ""
	I1218 01:52:27.768353 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.768361 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:27.768368 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:27.768444 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:27.794968 1550381 cri.go:89] found id: ""
	I1218 01:52:27.794993 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.795003 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:27.795010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:27.795094 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:27.821560 1550381 cri.go:89] found id: ""
	I1218 01:52:27.821587 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.821597 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:27.821603 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:27.821679 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:27.846888 1550381 cri.go:89] found id: ""
	I1218 01:52:27.846912 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.846921 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:27.846930 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:27.846942 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:27.861757 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:27.861785 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:27.926373 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:27.916602   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.917603   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919230   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919567   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.921199   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:27.916602   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.917603   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919230   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919567   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.921199   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:27.926400 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:27.926413 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:27.951763 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:27.951803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:27.984249 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:27.984278 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
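The pgrep line that opens each cycle is how the poll decides whether an apiserver process exists at all: -f matches against the full command line, -x requires the pattern to match that whole line, and -n returns only the newest matching PID. A sketch of an equivalent wait loop (the three-second sleep is an arbitrary choice here, not minikube's retry interval):

	# Exit once an apiserver whose full command line matches the pattern exists
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  sleep 3
	done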
	I1218 01:52:30.543132 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:30.553809 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:30.553883 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:30.580729 1550381 cri.go:89] found id: ""
	I1218 01:52:30.580758 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.580767 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:30.580774 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:30.580837 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:30.611455 1550381 cri.go:89] found id: ""
	I1218 01:52:30.611479 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.611488 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:30.611494 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:30.611558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:30.637976 1550381 cri.go:89] found id: ""
	I1218 01:52:30.638002 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.638025 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:30.638049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:30.638134 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:30.663110 1550381 cri.go:89] found id: ""
	I1218 01:52:30.663135 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.663144 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:30.663150 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:30.663211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:30.689367 1550381 cri.go:89] found id: ""
	I1218 01:52:30.689391 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.689401 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:30.689416 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:30.689480 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:30.714721 1550381 cri.go:89] found id: ""
	I1218 01:52:30.714747 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.714756 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:30.714764 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:30.714826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:30.740391 1550381 cri.go:89] found id: ""
	I1218 01:52:30.740419 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.740428 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:30.740438 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:30.740502 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:30.769197 1550381 cri.go:89] found id: ""
	I1218 01:52:30.769264 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.769286 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:30.769306 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:30.769337 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:30.825762 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:30.825799 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:30.840467 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:30.840497 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:30.907063 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:30.898565   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.899378   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901153   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901681   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.903149   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:30.898565   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.899378   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901153   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901681   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.903149   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:30.907085 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:30.907098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:30.933175 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:30.933208 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:33.464940 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:33.477904 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:33.477982 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:33.502677 1550381 cri.go:89] found id: ""
	I1218 01:52:33.502703 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.502711 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:33.502718 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:33.502778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:33.528314 1550381 cri.go:89] found id: ""
	I1218 01:52:33.528341 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.528350 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:33.528356 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:33.528418 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:33.554186 1550381 cri.go:89] found id: ""
	I1218 01:52:33.554213 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.554221 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:33.554227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:33.554286 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:33.578717 1550381 cri.go:89] found id: ""
	I1218 01:52:33.578740 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.578751 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:33.578758 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:33.578819 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:33.603980 1550381 cri.go:89] found id: ""
	I1218 01:52:33.604054 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.604079 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:33.604098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:33.604287 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:33.629122 1550381 cri.go:89] found id: ""
	I1218 01:52:33.629149 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.629158 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:33.629165 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:33.629248 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:33.660229 1550381 cri.go:89] found id: ""
	I1218 01:52:33.660266 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.660281 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:33.660288 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:33.660356 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:33.685746 1550381 cri.go:89] found id: ""
	I1218 01:52:33.685812 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.685838 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:33.685854 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:33.685866 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:33.717052 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:33.717078 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:33.777106 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:33.777142 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:33.791689 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:33.791719 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:33.855601 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:33.847150   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.847890   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.849576   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.850251   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.851854   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:33.847150   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.847890   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.849576   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.850251   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.851854   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:33.855621 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:33.855633 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
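The kubelet and containerd slices are plain journalctl reads: -u selects the systemd unit and -n 400 caps output at the last 400 entries. Run directly on the node they look like this; --no-pager is an addition for interactive use and is not part of the logged command:

	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u containerd -n 400 --no-pager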
	I1218 01:52:36.380440 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:36.395133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:36.395206 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:36.463112 1550381 cri.go:89] found id: ""
	I1218 01:52:36.463145 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.463154 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:36.463162 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:36.463235 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:36.489631 1550381 cri.go:89] found id: ""
	I1218 01:52:36.489656 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.489665 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:36.489671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:36.489733 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:36.515149 1550381 cri.go:89] found id: ""
	I1218 01:52:36.515175 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.515186 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:36.515192 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:36.515253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:36.543702 1550381 cri.go:89] found id: ""
	I1218 01:52:36.543727 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.543736 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:36.543743 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:36.543802 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:36.568359 1550381 cri.go:89] found id: ""
	I1218 01:52:36.568384 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.568393 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:36.568400 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:36.568457 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:36.591933 1550381 cri.go:89] found id: ""
	I1218 01:52:36.591959 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.591968 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:36.591974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:36.592033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:36.619454 1550381 cri.go:89] found id: ""
	I1218 01:52:36.619479 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.619488 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:36.619494 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:36.619552 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:36.644231 1550381 cri.go:89] found id: ""
	I1218 01:52:36.644256 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.644265 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:36.644274 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:36.644286 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:36.673981 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:36.674008 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:36.730614 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:36.730648 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:36.745581 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:36.745614 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:36.808564 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:36.800393   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.801019   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.802683   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.803224   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.804801   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:36.800393   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.801019   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.802683   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.803224   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.804801   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:36.808591 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:36.808604 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:39.334388 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:39.345831 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:39.345904 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:39.374463 1550381 cri.go:89] found id: ""
	I1218 01:52:39.374486 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.374495 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:39.374501 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:39.374567 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:39.439153 1550381 cri.go:89] found id: ""
	I1218 01:52:39.439178 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.439187 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:39.439196 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:39.439255 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:39.483631 1550381 cri.go:89] found id: ""
	I1218 01:52:39.483655 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.483664 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:39.483670 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:39.483746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:39.513656 1550381 cri.go:89] found id: ""
	I1218 01:52:39.513681 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.513689 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:39.513695 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:39.513757 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:39.538364 1550381 cri.go:89] found id: ""
	I1218 01:52:39.538389 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.538397 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:39.538404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:39.538469 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:39.562963 1550381 cri.go:89] found id: ""
	I1218 01:52:39.562989 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.562997 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:39.563004 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:39.563063 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:39.590225 1550381 cri.go:89] found id: ""
	I1218 01:52:39.590247 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.590255 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:39.590261 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:39.590317 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:39.619590 1550381 cri.go:89] found id: ""
	I1218 01:52:39.619613 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.619622 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:39.619631 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:39.619642 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:39.645098 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:39.645133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:39.675338 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:39.675370 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:39.731953 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:39.731988 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:39.746929 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:39.746957 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:39.815336 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:39.807111   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.807747   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809330   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809940   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.811532   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:39.807111   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.807747   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809330   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809940   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.811532   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
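The describe-nodes step runs the version-matched kubectl from /var/lib/minikube/binaries/v1.35.0-rc.1/ against the node-local kubeconfig, and it is that kubeconfig's server entry which resolves to localhost:8443 in the errors above. A one-line sketch to confirm the endpoint it targets; the expected-output comment is inferred from the "connection to the server localhost:8443" errors, not read from this kubeconfig:

	sudo grep 'server:' /var/lib/minikube/kubeconfig
	# expected form: server: https://localhost:8443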
	I1218 01:52:42.315631 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:42.327549 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:42.327635 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:42.355093 1550381 cri.go:89] found id: ""
	I1218 01:52:42.355117 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.355126 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:42.355133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:42.355193 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:42.383724 1550381 cri.go:89] found id: ""
	I1218 01:52:42.383746 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.383755 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:42.383763 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:42.383822 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:42.439728 1550381 cri.go:89] found id: ""
	I1218 01:52:42.439752 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.439761 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:42.439767 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:42.439826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:42.485723 1550381 cri.go:89] found id: ""
	I1218 01:52:42.485751 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.485760 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:42.485766 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:42.485835 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:42.518003 1550381 cri.go:89] found id: ""
	I1218 01:52:42.518030 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.518040 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:42.518046 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:42.518105 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:42.542509 1550381 cri.go:89] found id: ""
	I1218 01:52:42.542534 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.542543 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:42.542550 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:42.542608 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:42.567103 1550381 cri.go:89] found id: ""
	I1218 01:52:42.567127 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.567135 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:42.567144 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:42.567210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:42.591556 1550381 cri.go:89] found id: ""
	I1218 01:52:42.591623 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.591648 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:42.591670 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:42.591708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:42.622840 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:42.622867 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:42.677917 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:42.677950 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:42.692666 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:42.692699 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:42.765474 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:42.757065   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.757907   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759378   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759855   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.761353   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:42.757065   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.757907   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759378   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759855   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.761353   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:42.765497 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:42.765509 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
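The dmesg invocation filters the kernel ring buffer down to warning severity and above. Assuming this is util-linux dmesg, the short flags expand as shown below; this long-option form is an equivalent rewrite for readability, not the logged command:

	# -P = --nopager, -H = --human, -L=never = --color=never
	sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400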
	I1218 01:52:45.291290 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:45.308807 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:45.308972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:45.342117 1550381 cri.go:89] found id: ""
	I1218 01:52:45.342151 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.342160 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:45.342168 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:45.342233 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:45.370490 1550381 cri.go:89] found id: ""
	I1218 01:52:45.370516 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.370525 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:45.370531 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:45.370612 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:45.416227 1550381 cri.go:89] found id: ""
	I1218 01:52:45.416262 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.416272 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:45.416278 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:45.416359 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:45.475986 1550381 cri.go:89] found id: ""
	I1218 01:52:45.476010 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.476019 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:45.476026 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:45.476089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:45.505307 1550381 cri.go:89] found id: ""
	I1218 01:52:45.505375 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.505400 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:45.505419 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:45.505520 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:45.531649 1550381 cri.go:89] found id: ""
	I1218 01:52:45.531676 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.531685 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:45.531691 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:45.531762 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:45.557231 1550381 cri.go:89] found id: ""
	I1218 01:52:45.557258 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.557268 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:45.557274 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:45.557332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:45.581819 1550381 cri.go:89] found id: ""
	I1218 01:52:45.581846 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.581855 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:45.581864 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:45.581876 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:45.637946 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:45.637982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:45.653092 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:45.653127 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:45.733673 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:45.725909   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.726495   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.727841   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.728307   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.729804   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
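Every describe-nodes attempt in this run dies the same way: kubectl cannot even open a TCP connection to localhost:8443, so nothing is listening on the apiserver port at all (as opposed to an apiserver that is up but answering with errors). A minimal stand-alone check that reproduces exactly this failure mode, with no TLS or kubeconfig involved (an illustrative sketch, not part of minikube or the test suite):

// Hypothetical probe for the failure mode above: kubectl never gets past the
// TCP handshake, so a plain dial reproduces the error without TLS or client
// certificates.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// On the failing node this prints:
		// apiserver unreachable: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}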
	I1218 01:52:45.733695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:45.733708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:45.759208 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:45.759243 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
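The cycle above is the control-plane probe: for each expected component, shell out to crictl and treat an empty ID list as "no container found", then fall back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. The same probe can be reproduced with a short Go program (a hypothetical stand-alone sketch, not minikube's actual cri.go; it assumes crictl is installed on the node and runnable via sudo):

// For each expected control-plane component, ask crictl for matching
// container IDs and report the ones that are missing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}

In this run every component comes back empty, which is why each cycle ends in log gathering rather than a health check.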
	I1218 01:52:48.291278 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:48.302161 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:48.302234 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:48.326549 1550381 cri.go:89] found id: ""
	I1218 01:52:48.326572 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.326580 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:48.326587 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:48.326647 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:48.355829 1550381 cri.go:89] found id: ""
	I1218 01:52:48.355853 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.355863 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:48.355869 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:48.355927 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:48.384367 1550381 cri.go:89] found id: ""
	I1218 01:52:48.384404 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.384414 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:48.384421 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:48.384495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:48.440457 1550381 cri.go:89] found id: ""
	I1218 01:52:48.440486 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.440495 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:48.440502 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:48.440572 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:48.484538 1550381 cri.go:89] found id: ""
	I1218 01:52:48.484565 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.484574 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:48.484580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:48.484671 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:48.517629 1550381 cri.go:89] found id: ""
	I1218 01:52:48.517655 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.517664 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:48.517670 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:48.517727 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:48.544213 1550381 cri.go:89] found id: ""
	I1218 01:52:48.544250 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.544259 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:48.544268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:48.544338 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:48.571178 1550381 cri.go:89] found id: ""
	I1218 01:52:48.571214 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.571224 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:48.571233 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:48.571244 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:48.629108 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:48.629154 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:48.644078 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:48.644105 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:48.710322 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:48.701933   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.702491   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704137   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704712   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.706352   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:48.710345 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:48.710357 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:48.735873 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:48.735908 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:51.264224 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:51.274867 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:51.274936 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:51.302544 1550381 cri.go:89] found id: ""
	I1218 01:52:51.302574 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.302582 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:51.302591 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:51.302650 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:51.326887 1550381 cri.go:89] found id: ""
	I1218 01:52:51.326920 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.326929 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:51.326935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:51.326996 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:51.355805 1550381 cri.go:89] found id: ""
	I1218 01:52:51.355833 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.355842 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:51.355849 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:51.355910 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:51.385402 1550381 cri.go:89] found id: ""
	I1218 01:52:51.385475 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.385502 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:51.385516 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:51.385597 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:51.429600 1550381 cri.go:89] found id: ""
	I1218 01:52:51.429679 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.429705 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:51.429723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:51.429795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:51.482295 1550381 cri.go:89] found id: ""
	I1218 01:52:51.482362 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.482386 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:51.482406 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:51.482483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:51.509210 1550381 cri.go:89] found id: ""
	I1218 01:52:51.509282 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.509307 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:51.509319 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:51.509392 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:51.534258 1550381 cri.go:89] found id: ""
	I1218 01:52:51.534335 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.534359 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:51.534374 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:51.534399 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:51.590233 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:51.590266 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:51.604772 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:51.604807 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:51.669210 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:51.660468   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.661850   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.662312   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.663995   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.664345   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:51.669233 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:51.669245 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:51.694168 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:51.694201 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:54.225084 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:54.235834 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:54.235909 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:54.263169 1550381 cri.go:89] found id: ""
	I1218 01:52:54.263202 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.263212 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:54.263219 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:54.263286 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:54.288775 1550381 cri.go:89] found id: ""
	I1218 01:52:54.288801 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.288812 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:54.288818 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:54.288881 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:54.313424 1550381 cri.go:89] found id: ""
	I1218 01:52:54.313455 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.313463 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:54.313470 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:54.313545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:54.337557 1550381 cri.go:89] found id: ""
	I1218 01:52:54.337586 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.337595 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:54.337604 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:54.337660 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:54.362944 1550381 cri.go:89] found id: ""
	I1218 01:52:54.362968 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.362976 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:54.362983 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:54.363055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:54.405526 1550381 cri.go:89] found id: ""
	I1218 01:52:54.405546 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.405554 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:54.405560 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:54.405617 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:54.470952 1550381 cri.go:89] found id: ""
	I1218 01:52:54.470975 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.470983 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:54.470995 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:54.471051 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:54.499299 1550381 cri.go:89] found id: ""
	I1218 01:52:54.499324 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.499332 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:54.499341 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:54.499352 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:54.554755 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:54.554791 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:54.569411 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:54.569439 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:54.630717 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:54.622173   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.622694   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.623736   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625233   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625729   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:54.630737 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:54.630751 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:54.656160 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:54.656197 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:57.184460 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:57.195292 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:57.195360 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:57.220784 1550381 cri.go:89] found id: ""
	I1218 01:52:57.220821 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.220831 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:57.220837 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:57.220911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:57.245470 1550381 cri.go:89] found id: ""
	I1218 01:52:57.245493 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.245501 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:57.245508 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:57.245572 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:57.271053 1550381 cri.go:89] found id: ""
	I1218 01:52:57.271076 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.271084 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:57.271091 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:57.271149 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:57.297094 1550381 cri.go:89] found id: ""
	I1218 01:52:57.297117 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.297125 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:57.297132 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:57.297189 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:57.321869 1550381 cri.go:89] found id: ""
	I1218 01:52:57.321903 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.321913 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:57.321919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:57.321980 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:57.346700 1550381 cri.go:89] found id: ""
	I1218 01:52:57.346726 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.346736 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:57.346743 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:57.346804 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:57.371462 1550381 cri.go:89] found id: ""
	I1218 01:52:57.371487 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.371496 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:57.371503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:57.371561 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:57.408706 1550381 cri.go:89] found id: ""
	I1218 01:52:57.408725 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.408733 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:57.408742 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:57.408754 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:57.518131 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:57.510001   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.510418   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512044   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512702   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.514351   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:57.518152 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:57.518165 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:57.544836 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:57.544872 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:57.572743 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:57.572782 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:57.635526 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:57.635567 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
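The timestamps (01:52:45, :48, :51, and so on) show the whole probe repeating on a roughly three-second cadence until an overall deadline expires, with sudo pgrep -xnf kube-apiserver.*minikube.* as the cheap liveness check at the top of each cycle. A sketch of such a wait loop (hypothetical: the 3s interval is read off the log, and the 4-minute timeout is an assumed placeholder, not the value the test actually uses):

// Retry a cheap apiserver liveness check every 3 seconds until it succeeds
// or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// apiserverUp stands in for the pgrep/crictl checks in the log: it only
// verifies that something accepts TCP connections on the apiserver port.
func apiserverUp() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}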
	I1218 01:53:00.150459 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:00.169757 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:00.169839 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:00.240442 1550381 cri.go:89] found id: ""
	I1218 01:53:00.240472 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.240482 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:00.240489 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:00.240568 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:00.297137 1550381 cri.go:89] found id: ""
	I1218 01:53:00.297224 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.297243 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:00.297253 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:00.297363 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:00.336217 1550381 cri.go:89] found id: ""
	I1218 01:53:00.336242 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.336251 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:00.336259 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:00.336333 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:00.365991 1550381 cri.go:89] found id: ""
	I1218 01:53:00.366020 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.366030 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:00.366037 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:00.366107 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:00.425076 1550381 cri.go:89] found id: ""
	I1218 01:53:00.425152 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.425177 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:00.425198 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:00.425310 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:00.464180 1550381 cri.go:89] found id: ""
	I1218 01:53:00.464259 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.464291 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:00.464313 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:00.464419 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:00.498012 1550381 cri.go:89] found id: ""
	I1218 01:53:00.498088 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.498112 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:00.498133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:00.498248 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:00.526153 1550381 cri.go:89] found id: ""
	I1218 01:53:00.526228 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.526250 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:00.526271 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:00.526313 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:00.581384 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:00.581418 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:00.596391 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:00.596467 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:00.665518 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:00.656710   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.657369   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659279   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659812   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.661528   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:00.665541 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:00.665554 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:00.691014 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:00.691052 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:03.221071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:03.232071 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:03.232143 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:03.256975 1550381 cri.go:89] found id: ""
	I1218 01:53:03.256998 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.257006 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:03.257012 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:03.257070 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:03.286981 1550381 cri.go:89] found id: ""
	I1218 01:53:03.287006 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.287021 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:03.287028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:03.287089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:03.315833 1550381 cri.go:89] found id: ""
	I1218 01:53:03.315858 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.315867 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:03.315873 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:03.315935 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:03.343588 1550381 cri.go:89] found id: ""
	I1218 01:53:03.343611 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.343619 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:03.343626 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:03.343684 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:03.369440 1550381 cri.go:89] found id: ""
	I1218 01:53:03.369469 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.369478 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:03.369485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:03.369545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:03.428115 1550381 cri.go:89] found id: ""
	I1218 01:53:03.428138 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.428147 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:03.428154 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:03.428211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:03.484823 1550381 cri.go:89] found id: ""
	I1218 01:53:03.484847 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.484856 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:03.484862 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:03.484920 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:03.512094 1550381 cri.go:89] found id: ""
	I1218 01:53:03.512119 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.512128 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:03.512139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:03.512150 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:03.568376 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:03.568411 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:03.583603 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:03.583632 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:03.651107 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:03.641448   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.642529   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.644209   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.645062   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.646724   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:03.651129 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:03.651143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:03.676088 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:03.676125 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:06.206266 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:06.217464 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:06.217558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:06.242745 1550381 cri.go:89] found id: ""
	I1218 01:53:06.242770 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.242779 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:06.242786 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:06.242846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:06.267735 1550381 cri.go:89] found id: ""
	I1218 01:53:06.267757 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.267765 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:06.267771 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:06.267834 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:06.297274 1550381 cri.go:89] found id: ""
	I1218 01:53:06.297297 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.297306 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:06.297313 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:06.297372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:06.326794 1550381 cri.go:89] found id: ""
	I1218 01:53:06.326820 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.326829 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:06.326835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:06.326893 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:06.351519 1550381 cri.go:89] found id: ""
	I1218 01:53:06.351543 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.351552 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:06.351558 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:06.351617 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:06.378499 1550381 cri.go:89] found id: ""
	I1218 01:53:06.378525 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.378534 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:06.378540 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:06.378598 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:06.414203 1550381 cri.go:89] found id: ""
	I1218 01:53:06.414236 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.414246 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:06.414252 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:06.414316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:06.493089 1550381 cri.go:89] found id: ""
	I1218 01:53:06.493116 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.493125 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:06.493134 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:06.493147 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:06.522114 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:06.522145 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:06.578855 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:06.578891 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:06.594005 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:06.594033 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:06.658779 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:06.650476   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.651243   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.652788   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.653284   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.654784   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:06.658800 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:06.658814 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:09.183921 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:09.194857 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:09.194928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:09.218740 1550381 cri.go:89] found id: ""
	I1218 01:53:09.218764 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.218772 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:09.218778 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:09.218835 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:09.243853 1550381 cri.go:89] found id: ""
	I1218 01:53:09.243879 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.243888 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:09.243894 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:09.243954 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:09.269591 1550381 cri.go:89] found id: ""
	I1218 01:53:09.269615 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.269624 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:09.269630 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:09.269691 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:09.299082 1550381 cri.go:89] found id: ""
	I1218 01:53:09.299120 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.299129 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:09.299136 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:09.299207 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:09.324088 1550381 cri.go:89] found id: ""
	I1218 01:53:09.324121 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.324131 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:09.324137 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:09.324203 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:09.348898 1550381 cri.go:89] found id: ""
	I1218 01:53:09.348921 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.348930 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:09.348936 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:09.348997 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:09.374245 1550381 cri.go:89] found id: ""
	I1218 01:53:09.374268 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.374279 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:09.374286 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:09.374346 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:09.413630 1550381 cri.go:89] found id: ""
	I1218 01:53:09.413653 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.413662 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
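Each empty `found id: ""` above is one crictl probe returning no container ID for the named component. The same sweep can be reproduced by hand; a sketch, assuming crictl is available on the node:

    # List all containers (running or exited) for each expected control-plane component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%-24s %s container(s)\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
    done

A count of 0 for every component, as here, means containerd never started any control-plane container at all, which points at kubelet or bootstrap failure rather than at any individual pod.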
	I1218 01:53:09.413672 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:09.413689 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
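The dmesg step narrows the kernel ring buffer to warning-and-above messages; the flags are standard util-linux dmesg options (-P no pager, -H human-readable timestamps, -L=never no color). Runnable standalone on any Linux host:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400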
	I1218 01:53:09.474660 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:09.474685 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:09.541382 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:09.534111   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.534512   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.535991   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.536308   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.537731   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:09.541403 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:09.541416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
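The containerd and kubelet sections are read straight from the node's systemd journal. The equivalent standalone commands (assuming the units are named containerd and kubelet, as on the minikube image):

    sudo journalctl -u containerd -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager

With no control-plane containers to inspect, these two journals plus dmesg are the only diagnostics left, which is why the same gathers repeat below.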
	I1218 01:53:09.566761 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:09.566792 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
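The container-status command embeds a small shell fallback: `which crictl || echo crictl` substitutes the bare name crictl when which finds nothing (so the sudo invocation still fails with a clear error message), and `|| sudo docker ps -a` retries with Docker if the whole crictl call fails. Expanded with $() for readability (same behavior as the backtick form above):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a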
	I1218 01:53:09.593984 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:09.594011 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
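From here the log settles into the same roughly three-second cycle: probe for a running apiserver process, list CRI containers by component, and re-gather the kubelet, dmesg, describe-nodes, containerd, and container-status logs. The driving check is the pgrep line; its shape as a retry loop, sketched with an interval inferred from the timestamps rather than taken from minikube's source:

    # -x: match the pattern against the whole command line exactly (with -f),
    # -n: newest matching process, -f: match the full command line.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3
    done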
	I1218 01:53:12.149658 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:12.160130 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:12.160258 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:12.185266 1550381 cri.go:89] found id: ""
	I1218 01:53:12.185339 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.185356 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:12.185363 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:12.185434 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:12.212092 1550381 cri.go:89] found id: ""
	I1218 01:53:12.212124 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.212133 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:12.212139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:12.212205 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:12.235977 1550381 cri.go:89] found id: ""
	I1218 01:53:12.236009 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.236018 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:12.236024 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:12.236091 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:12.260037 1550381 cri.go:89] found id: ""
	I1218 01:53:12.260069 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.260079 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:12.260085 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:12.260151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:12.285034 1550381 cri.go:89] found id: ""
	I1218 01:53:12.285060 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.285069 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:12.285075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:12.285142 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:12.309185 1550381 cri.go:89] found id: ""
	I1218 01:53:12.309221 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.309231 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:12.309256 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:12.309330 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:12.333588 1550381 cri.go:89] found id: ""
	I1218 01:53:12.333613 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.333622 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:12.333629 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:12.333697 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:12.362204 1550381 cri.go:89] found id: ""
	I1218 01:53:12.362228 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.362237 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:12.362246 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:12.362292 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:12.427192 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:12.431443 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:12.465023 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:12.465048 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:12.534431 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:12.526324   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.526831   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528540   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528945   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.530443   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:12.534453 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:12.534465 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:12.560311 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:12.560349 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:15.088443 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:15.100075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:15.100170 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:15.126386 1550381 cri.go:89] found id: ""
	I1218 01:53:15.126410 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.126419 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:15.126425 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:15.126493 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:15.152426 1550381 cri.go:89] found id: ""
	I1218 01:53:15.152450 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.152459 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:15.152466 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:15.152529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:15.178155 1550381 cri.go:89] found id: ""
	I1218 01:53:15.178184 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.178193 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:15.178199 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:15.178263 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:15.203664 1550381 cri.go:89] found id: ""
	I1218 01:53:15.203687 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.203696 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:15.203703 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:15.203767 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:15.228792 1550381 cri.go:89] found id: ""
	I1218 01:53:15.228815 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.228823 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:15.228830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:15.228891 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:15.257550 1550381 cri.go:89] found id: ""
	I1218 01:53:15.257575 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.257585 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:15.257594 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:15.257656 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:15.283324 1550381 cri.go:89] found id: ""
	I1218 01:53:15.283350 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.283359 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:15.283365 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:15.283430 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:15.311422 1550381 cri.go:89] found id: ""
	I1218 01:53:15.311455 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.311465 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:15.311474 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:15.311486 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:15.367419 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:15.367456 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:15.382340 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:15.382370 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:15.500526 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:15.489021   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.489409   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.492963   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.494907   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.495528   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:15.500551 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:15.500563 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:15.527154 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:15.527190 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:18.057588 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:18.068726 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:18.068799 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:18.096722 1550381 cri.go:89] found id: ""
	I1218 01:53:18.096859 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.096895 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:18.096919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:18.097001 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:18.121827 1550381 cri.go:89] found id: ""
	I1218 01:53:18.121851 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.121860 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:18.121866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:18.121932 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:18.146993 1550381 cri.go:89] found id: ""
	I1218 01:53:18.147018 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.147028 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:18.147034 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:18.147094 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:18.171236 1550381 cri.go:89] found id: ""
	I1218 01:53:18.171258 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.171266 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:18.171272 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:18.171333 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:18.199330 1550381 cri.go:89] found id: ""
	I1218 01:53:18.199355 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.199367 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:18.199373 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:18.199432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:18.225625 1550381 cri.go:89] found id: ""
	I1218 01:53:18.225649 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.225659 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:18.225666 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:18.225746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:18.250702 1550381 cri.go:89] found id: ""
	I1218 01:53:18.250725 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.250734 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:18.250741 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:18.250854 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:18.276500 1550381 cri.go:89] found id: ""
	I1218 01:53:18.276525 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.276534 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:18.276543 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:18.276559 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:18.333753 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:18.333788 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:18.350466 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:18.350520 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:18.431435 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:18.419698   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.420102   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421430   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421827   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.424073   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:18.431467 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:18.431480 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:18.463849 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:18.463889 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:21.008824 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:21.019970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:21.020040 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:21.044583 1550381 cri.go:89] found id: ""
	I1218 01:53:21.044607 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.044616 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:21.044641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:21.044701 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:21.069261 1550381 cri.go:89] found id: ""
	I1218 01:53:21.069286 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.069295 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:21.069301 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:21.069360 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:21.099196 1550381 cri.go:89] found id: ""
	I1218 01:53:21.099219 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.099228 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:21.099234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:21.099298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:21.124519 1550381 cri.go:89] found id: ""
	I1218 01:53:21.124541 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.124550 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:21.124556 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:21.124707 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:21.153447 1550381 cri.go:89] found id: ""
	I1218 01:53:21.153474 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.153483 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:21.153503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:21.153561 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:21.178670 1550381 cri.go:89] found id: ""
	I1218 01:53:21.178694 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.178702 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:21.178709 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:21.178770 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:21.207919 1550381 cri.go:89] found id: ""
	I1218 01:53:21.207944 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.207953 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:21.207959 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:21.208017 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:21.232478 1550381 cri.go:89] found id: ""
	I1218 01:53:21.232503 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.232512 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:21.232521 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:21.232533 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:21.287757 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:21.287789 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:21.302312 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:21.302349 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:21.366377 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:21.358420   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.358816   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360484   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360966   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.362554   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:21.366399 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:21.366411 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:21.393029 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:21.393110 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:23.948667 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:23.959340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:23.959436 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:23.986999 1550381 cri.go:89] found id: ""
	I1218 01:53:23.987024 1550381 logs.go:282] 0 containers: []
	W1218 01:53:23.987033 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:23.987040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:23.987103 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:24.020720 1550381 cri.go:89] found id: ""
	I1218 01:53:24.020799 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.020833 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:24.020846 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:24.020920 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:24.047235 1550381 cri.go:89] found id: ""
	I1218 01:53:24.047267 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.047283 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:24.047299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:24.047373 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:24.080575 1550381 cri.go:89] found id: ""
	I1218 01:53:24.080599 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.080608 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:24.080615 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:24.080706 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:24.105557 1550381 cri.go:89] found id: ""
	I1218 01:53:24.105585 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.105595 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:24.105601 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:24.105661 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:24.130738 1550381 cri.go:89] found id: ""
	I1218 01:53:24.130764 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.130773 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:24.130779 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:24.130839 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:24.159061 1550381 cri.go:89] found id: ""
	I1218 01:53:24.159088 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.159097 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:24.159104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:24.159166 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:24.187647 1550381 cri.go:89] found id: ""
	I1218 01:53:24.187674 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.187684 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:24.187694 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:24.187704 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:24.242513 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:24.242544 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:24.257316 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:24.257396 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:24.320000 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:24.312222   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.312775   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314002   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314568   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.316156   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:24.320020 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:24.320037 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:24.346099 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:24.346136 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:26.873531 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:26.885238 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:26.885314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:26.910216 1550381 cri.go:89] found id: ""
	I1218 01:53:26.910239 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.910247 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:26.910253 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:26.910313 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:26.933448 1550381 cri.go:89] found id: ""
	I1218 01:53:26.933475 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.933484 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:26.933490 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:26.933553 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:26.957855 1550381 cri.go:89] found id: ""
	I1218 01:53:26.957888 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.957897 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:26.957904 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:26.957979 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:26.982293 1550381 cri.go:89] found id: ""
	I1218 01:53:26.982357 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.982373 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:26.982380 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:26.982445 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:27.008361 1550381 cri.go:89] found id: ""
	I1218 01:53:27.008398 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.008408 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:27.008415 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:27.008475 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:27.037587 1550381 cri.go:89] found id: ""
	I1218 01:53:27.037613 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.037622 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:27.037628 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:27.037686 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:27.065312 1550381 cri.go:89] found id: ""
	I1218 01:53:27.065376 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.065401 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:27.065423 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:27.065510 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:27.090401 1550381 cri.go:89] found id: ""
	I1218 01:53:27.090427 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.090435 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:27.090445 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:27.090457 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:27.105745 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:27.105773 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:27.166883 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:27.158743   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.159249   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.160767   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.161180   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.162645   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:27.166902 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:27.166917 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:27.192695 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:27.192732 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:27.224139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:27.224167 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:29.783401 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:29.794627 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:29.794738 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:29.819835 1550381 cri.go:89] found id: ""
	I1218 01:53:29.819862 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.819872 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:29.819879 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:29.819939 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:29.844881 1550381 cri.go:89] found id: ""
	I1218 01:53:29.844910 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.844919 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:29.844925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:29.844986 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:29.869995 1550381 cri.go:89] found id: ""
	I1218 01:53:29.870023 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.870032 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:29.870038 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:29.870100 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:29.895647 1550381 cri.go:89] found id: ""
	I1218 01:53:29.895671 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.895681 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:29.895687 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:29.895746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:29.922749 1550381 cri.go:89] found id: ""
	I1218 01:53:29.922773 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.922782 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:29.922788 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:29.922847 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:29.948026 1550381 cri.go:89] found id: ""
	I1218 01:53:29.948052 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.948061 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:29.948071 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:29.948129 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:29.974575 1550381 cri.go:89] found id: ""
	I1218 01:53:29.974598 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.974607 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:29.974614 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:29.974673 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:30.004723 1550381 cri.go:89] found id: ""
	I1218 01:53:30.004807 1550381 logs.go:282] 0 containers: []
	W1218 01:53:30.004831 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:30.004861 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:30.004908 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:30.103939 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:30.103976 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:30.120775 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:30.120815 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:30.191673 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:30.183408   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.184288   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.185882   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.186462   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.187470   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:30.183408   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.184288   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.185882   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.186462   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.187470   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:30.191695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:30.191707 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:30.218142 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:30.218175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:32.750923 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:32.764019 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:32.764089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:32.789861 1550381 cri.go:89] found id: ""
	I1218 01:53:32.789885 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.789894 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:32.789900 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:32.789967 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:32.821480 1550381 cri.go:89] found id: ""
	I1218 01:53:32.821513 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.821525 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:32.821532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:32.821601 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:32.847702 1550381 cri.go:89] found id: ""
	I1218 01:53:32.847733 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.847744 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:32.847751 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:32.847811 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:32.872820 1550381 cri.go:89] found id: ""
	I1218 01:53:32.872845 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.872855 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:32.872861 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:32.872976 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:32.901902 1550381 cri.go:89] found id: ""
	I1218 01:53:32.901975 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.902012 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:32.902020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:32.902100 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:32.926991 1550381 cri.go:89] found id: ""
	I1218 01:53:32.927016 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.927024 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:32.927031 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:32.927093 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:32.951930 1550381 cri.go:89] found id: ""
	I1218 01:53:32.951957 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.951966 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:32.951972 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:32.952034 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:32.977838 1550381 cri.go:89] found id: ""
	I1218 01:53:32.977864 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.977874 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:32.977883 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:32.977894 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:33.047486 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:33.037555   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.038730   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.039716   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041395   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041745   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:33.037555   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.038730   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.039716   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041395   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041745   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:33.047516 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:33.047530 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:33.074046 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:33.074084 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:33.106481 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:33.106509 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:33.164051 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:33.164095 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:35.679393 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:35.706090 1550381 out.go:203] 
	W1218 01:53:35.709129 1550381 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1218 01:53:35.709179 1550381 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1218 01:53:35.709189 1550381 out.go:285] * Related issues:
	W1218 01:53:35.709204 1550381 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1218 01:53:35.709225 1550381 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1218 01:53:35.712031 1550381 out.go:203] 
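	Note: the exit above is minikube's six-minute "wait for apiserver proc" giving up. A minimal shell sketch of an equivalent poll (the pgrep pattern is copied from this log; the deadline and retry interval are illustrative assumptions, not minikube's exact implementation):
	
	    # Poll for a kube-apiserver process until a deadline.
	    deadline=$((SECONDS + 360))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "apiserver process never appeared" >&2
	        exit 1
	      fi
	      sleep 5  # retry interval (assumed)
	    done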
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058634955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058646516Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058675996Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058690896Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058702449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058719162Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058734998Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058749521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058766029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058797364Z" level=info msg="Connect containerd service"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.059062129Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.059621443Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.078574656Z" level=info msg="Start subscribing containerd event"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.078669144Z" level=info msg="Start recovering state"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.079191052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.079329806Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117026802Z" level=info msg="Start event monitor"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117092737Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117103362Z" level=info msg="Start streaming server"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117113224Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117122127Z" level=info msg="runtime interface starting up..."
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117129035Z" level=info msg="starting plugins..."
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117373017Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 01:47:32 newest-cni-120615 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.118837196Z" level=info msg="containerd successfully booted in 0.082564s"
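	Note: the only error in the containerd startup log above is the CNI config load failure, which is expected before kubeadm or a CNI addon has written a config. The directory containerd scans is named in the error itself and can be inspected directly:
	
	    # containerd's CRI plugin loads network configs from here;
	    # the directory is empty at this point, hence the error above.
	    ls -l /etc/cni/net.d/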
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:38.792740   13492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:38.793298   13492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:38.794817   13492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:38.795159   13492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:38.796617   13492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
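	Note: every request above fails with "connection refused" on localhost:8443, i.e. nothing is listening where the apiserver should be. Two quick probes consistent with the commands already in this log (a diagnostic sketch; /healthz is a standard apiserver endpoint):
	
	    # Fails with "connection refused" while no apiserver is listening.
	    curl -sSk https://localhost:8443/healthz
	    # Confirms no kube-apiserver container exists, matching "container status" above.
	    sudo crictl ps -a --name=kube-apiserver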
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:53:38 up  8:36,  0 user,  load average: 0.20, 0.52, 1.13
	Linux newest-cni-120615 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:53:35 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:36 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 18 01:53:36 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:36 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:36 newest-cni-120615 kubelet[13365]: E1218 01:53:36.459335   13365 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:36 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:36 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:37 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 18 01:53:37 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:37 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:37 newest-cni-120615 kubelet[13378]: E1218 01:53:37.242631   13378 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:37 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:37 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:37 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 18 01:53:37 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:37 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:38 newest-cni-120615 kubelet[13392]: E1218 01:53:38.010501   13392 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:38 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:38 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:38 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 18 01:53:38 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:38 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:38 newest-cni-120615 kubelet[13481]: E1218 01:53:38.738256   13481 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:38 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:38 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
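	Note: the kubelet crash loop above appears to be the root cause of this failure: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host, so no static pods (including kube-apiserver) can ever be launched. The host's cgroup mode can be checked with a one-liner (a well-known stat idiom, not taken from this log):
	
	    # Prints "cgroup2fs" on a cgroup v2 host; "tmpfs" indicates the
	    # legacy cgroup v1 hierarchy that this kubelet rejects.
	    stat -fc %T /sys/fs/cgroup/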
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (370.383671ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-120615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (375.41s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.24s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... the WARNING above was repeated 29 more times while polling ...]
E1218 01:47:57.379199 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... the WARNING above was repeated 10 more times while polling ...]
E1218 01:48:07.475404 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... the WARNING above was repeated 16 more times while polling ...]
E1218 01:48:25.214851 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[... the WARNING above was repeated 50 more times while polling ...]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 01:50:04.395399 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 67 more times]
E1218 01:51:11.681190 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: [last message repeated 30 more times]
E1218 01:51:43.269827 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 73 more times]
E1218 01:52:57.378850 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 8 more times]
E1218 01:53:06.334182 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 18 more times]
E1218 01:53:25.214449 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous line repeated 74 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
I1218 01:54:45.470403 1261148 config.go:182] Loaded profile config "auto-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical warning repeated 18 times; duplicates elided]
E1218 01:55:04.395137 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical warning repeated 68 times; duplicates elided]
E1218 01:56:11.680856 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical warning repeated 15 times; duplicates elided]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 2 (337.068457ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
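
[editor's note] The long run of "connection refused" warnings above, ending in the client rate limiter error once the 9m0s deadline expires, is the signature of a label-selector poll loop against an apiserver that is down. Below is a minimal sketch, assuming client-go, of that kind of loop; it is not minikube's actual helper, and the kubeconfig path is illustrative only:

	// poll_dashboard.go: a sketch of a pod poll loop like the one behind
	// the warnings above -- list pods by label selector until one appears
	// or the context deadline expires.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path; the test harness uses its own profile.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// 9m0s matches the timeout reported by start_stop_delete_test.go:272.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		for {
			pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				// While the apiserver is down, each iteration yields a
				// "connection refused" warning like those in the log.
				fmt.Println("WARNING: pod list returned:", err)
			} else if len(pods.Items) > 0 {
				fmt.Println("dashboard pod found:", pods.Items[0].Name)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("context deadline exceeded")
				return
			case <-time.After(5 * time.Second):
			}
		}
	}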
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-970975
helpers_test.go:244: (dbg) docker inspect no-preload-970975:

-- stdout --
	[
	    {
	        "Id": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	        "Created": "2025-12-18T01:31:17.073767234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1542592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:41:17.647711914Z",
	            "FinishedAt": "2025-12-18T01:41:16.31019941Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hosts",
	        "LogPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d-json.log",
	        "Name": "/no-preload-970975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	                "LowerDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970975",
	                "Source": "/var/lib/docker/volumes/no-preload-970975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970975",
	                "name.minikube.sigs.k8s.io": "no-preload-970975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8868484521f3c95b5d3384207de825b735eca41ce409d5b6097489f36adbd1f",
	            "SandboxKey": "/var/run/docker/netns/a8868484521f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34213"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34214"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34215"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:c4:c7:ad:db:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ab8f39244bcf5d4900f2aa17c2792aefcc54582c4c250699be8d71c4c2a27a9",
	                    "EndpointID": "f645b66df5fb6b54a71529960c16fc0d0eda8d0c9be9273792de657fffcd9b75",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970975",
	                        "b1403d931305"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
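
[editor's note] The inspect output above shows the container healthy from Docker's point of view: running since 01:41:17, with the apiserver port 8443/tcp published on 127.0.0.1:34215 and the static address 192.168.76.2 on the profile network. A minimal sketch, assuming the Docker Engine Go SDK (github.com/docker/docker/client), of reading those same fields programmatically rather than via the docker inspect CLI:

	// inspect_port.go: read the host binding for the container's 8443/tcp
	// (the apiserver port) and its static network address -- the same data
	// "docker inspect no-preload-970975" printed above.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		inspect, err := cli.ContainerInspect(context.Background(), "no-preload-970975")
		if err != nil {
			panic(err)
		}

		// NetworkSettings.Ports maps container ports to host bindings;
		// for the log above this prints 127.0.0.1:34215.
		for _, binding := range inspect.NetworkSettings.Ports[nat.Port("8443/tcp")] {
			fmt.Printf("%s:%s\n", binding.HostIP, binding.HostPort)
		}
		// The static IP on the profile network (192.168.76.2 above):
		fmt.Println(inspect.NetworkSettings.Networks["no-preload-970975"].IPAddress)
	}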
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975: exit status 2 (323.26177ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
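
[editor's note] Both status checks pass --format a Go template ({{.APIServer}} earlier, {{.Host}} here) that is rendered against minikube's status struct, which is how the same profile can report Host "Running" while APIServer reports "Stopped": the container is up but the apiserver inside it is not. A self-contained sketch of that rendering; the Status type here is hypothetical, not minikube's own:

	// status_format.go: render a --format-style Go text/template against a
	// status struct, as the two status invocations above do.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// Mirrors the post-mortem above: host container up, apiserver down.
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}

		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		} // prints "Stopped", matching the exit-status-2 run above
	}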
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970975 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-970975 logs -n 25: (1.004045174s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-459533 sudo systemctl cat kubelet --no-pager                                                                                           │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo journalctl -xeu kubelet --all --full --no-pager                                                                            │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo cat /etc/kubernetes/kubelet.conf                                                                                           │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo cat /var/lib/kubelet/config.yaml                                                                                           │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo systemctl status docker --all --full --no-pager                                                                            │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p auto-459533 sudo systemctl cat docker --no-pager                                                                                            │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo cat /etc/docker/daemon.json                                                                                                │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p auto-459533 sudo docker system info                                                                                                         │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p auto-459533 sudo systemctl status cri-docker --all --full --no-pager                                                                        │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p auto-459533 sudo systemctl cat cri-docker --no-pager                                                                                        │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                   │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p auto-459533 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                             │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo cri-dockerd --version                                                                                                      │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo systemctl status containerd --all --full --no-pager                                                                        │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo systemctl cat containerd --no-pager                                                                                        │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo cat /lib/systemd/system/containerd.service                                                                                 │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo cat /etc/containerd/config.toml                                                                                            │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo containerd config dump                                                                                                     │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo systemctl status crio --all --full --no-pager                                                                              │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │                     │
	│ ssh     │ -p auto-459533 sudo systemctl cat crio --no-pager                                                                                              │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                    │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ ssh     │ -p auto-459533 sudo crio config                                                                                                                │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ delete  │ -p auto-459533                                                                                                                                 │ auto-459533    │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:55 UTC │
	│ start   │ -p kindnet-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd │ kindnet-459533 │ jenkins │ v1.37.0 │ 18 Dec 25 01:55 UTC │ 18 Dec 25 01:56 UTC │
	│ ssh     │ -p kindnet-459533 pgrep -a kubelet                                                                                                             │ kindnet-459533 │ jenkins │ v1.37.0 │ 18 Dec 25 01:56 UTC │ 18 Dec 25 01:56 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:55:17
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:55:17.966230 1575756 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:55:17.966405 1575756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:55:17.966418 1575756 out.go:374] Setting ErrFile to fd 2...
	I1218 01:55:17.966424 1575756 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:55:17.966702 1575756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:55:17.967148 1575756 out.go:368] Setting JSON to false
	I1218 01:55:17.968070 1575756 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":31064,"bootTime":1765991854,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:55:17.968137 1575756 start.go:143] virtualization:  
	I1218 01:55:17.971778 1575756 out.go:179] * [kindnet-459533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:55:17.976139 1575756 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:55:17.976278 1575756 notify.go:221] Checking for updates...
	I1218 01:55:17.982471 1575756 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:55:17.985539 1575756 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:55:17.988617 1575756 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:55:17.991684 1575756 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:55:17.994684 1575756 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:55:17.998267 1575756 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:55:17.998374 1575756 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:55:18.028788 1575756 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:55:18.028938 1575756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:55:18.092952 1575756 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:55:18.082732215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:55:18.093102 1575756 docker.go:319] overlay module found
	I1218 01:55:18.096383 1575756 out.go:179] * Using the docker driver based on user configuration
	I1218 01:55:18.099430 1575756 start.go:309] selected driver: docker
	I1218 01:55:18.099472 1575756 start.go:927] validating driver "docker" against <nil>
	I1218 01:55:18.099487 1575756 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:55:18.100370 1575756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:55:18.154788 1575756 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:55:18.145815294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:55:18.154945 1575756 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1218 01:55:18.155211 1575756 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 01:55:18.158267 1575756 out.go:179] * Using Docker driver with root privileges
	I1218 01:55:18.161132 1575756 cni.go:84] Creating CNI manager for "kindnet"
	I1218 01:55:18.161164 1575756 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 01:55:18.161250 1575756 start.go:353] cluster config:
	{Name:kindnet-459533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:55:18.164423 1575756 out.go:179] * Starting "kindnet-459533" primary control-plane node in "kindnet-459533" cluster
	I1218 01:55:18.167369 1575756 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:55:18.170378 1575756 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:55:18.173271 1575756 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1218 01:55:18.173326 1575756 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1218 01:55:18.173332 1575756 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:55:18.173352 1575756 cache.go:65] Caching tarball of preloaded images
	I1218 01:55:18.173433 1575756 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:55:18.173443 1575756 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1218 01:55:18.173551 1575756 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/config.json ...
	I1218 01:55:18.173577 1575756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/config.json: {Name:mkd675d9d445cf1e5637c88e9c738bdd8795e0f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:18.193191 1575756 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:55:18.193216 1575756 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:55:18.193231 1575756 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:55:18.193265 1575756 start.go:360] acquireMachinesLock for kindnet-459533: {Name:mk5eaff0d760b0d53634e99670c92b31f149ff76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:55:18.193379 1575756 start.go:364] duration metric: took 92.371µs to acquireMachinesLock for "kindnet-459533"
	I1218 01:55:18.193411 1575756 start.go:93] Provisioning new machine with config: &{Name:kindnet-459533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:55:18.193481 1575756 start.go:125] createHost starting for "" (driver="docker")
	I1218 01:55:18.196977 1575756 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1218 01:55:18.197222 1575756 start.go:159] libmachine.API.Create for "kindnet-459533" (driver="docker")
	I1218 01:55:18.197259 1575756 client.go:173] LocalClient.Create starting
	I1218 01:55:18.197327 1575756 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 01:55:18.197370 1575756 main.go:143] libmachine: Decoding PEM data...
	I1218 01:55:18.197389 1575756 main.go:143] libmachine: Parsing certificate...
	I1218 01:55:18.197448 1575756 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 01:55:18.197470 1575756 main.go:143] libmachine: Decoding PEM data...
	I1218 01:55:18.197481 1575756 main.go:143] libmachine: Parsing certificate...
	I1218 01:55:18.197854 1575756 cli_runner.go:164] Run: docker network inspect kindnet-459533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 01:55:18.214049 1575756 cli_runner.go:211] docker network inspect kindnet-459533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 01:55:18.214145 1575756 network_create.go:284] running [docker network inspect kindnet-459533] to gather additional debugging logs...
	I1218 01:55:18.214163 1575756 cli_runner.go:164] Run: docker network inspect kindnet-459533
	W1218 01:55:18.229999 1575756 cli_runner.go:211] docker network inspect kindnet-459533 returned with exit code 1
	I1218 01:55:18.230042 1575756 network_create.go:287] error running [docker network inspect kindnet-459533]: docker network inspect kindnet-459533: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-459533 not found
	I1218 01:55:18.230057 1575756 network_create.go:289] output of [docker network inspect kindnet-459533]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-459533 not found
	
	** /stderr **
	I1218 01:55:18.230157 1575756 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:55:18.246423 1575756 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
	I1218 01:55:18.246832 1575756 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-687fba22ee0f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:88:15:4b:79:13} reservation:<nil>}
	I1218 01:55:18.247099 1575756 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb17cfebd2dd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:ff:26:b3:7d:81} reservation:<nil>}
	I1218 01:55:18.247420 1575756 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ab8f39244bc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:a9:2b:d7:5d:ce} reservation:<nil>}
	I1218 01:55:18.247844 1575756 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196b8e0}
	I1218 01:55:18.247867 1575756 network_create.go:124] attempt to create docker network kindnet-459533 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1218 01:55:18.247926 1575756 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-459533 kindnet-459533
	I1218 01:55:18.306773 1575756 network_create.go:108] docker network kindnet-459533 192.168.85.0/24 created
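
The scan above is minikube's free-subnet probe: it walks the private /24 candidates in a fixed order (192.168.49.0, .58, .67, .76, .85, ...) and takes the first one with no bridge interface already attached. A minimal shell sketch of the same idea, assuming only the standard docker CLI; the network name demo-net is a placeholder:

    # Collect the subnets already claimed by existing docker networks.
    taken=$(docker network inspect $(docker network ls -q) \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' | tr '\n' ' ')
    # Walk the candidates in order and create a bridge on the first free one.
    for subnet in 192.168.49.0/24 192.168.58.0/24 192.168.67.0/24 192.168.76.0/24 192.168.85.0/24; do
      case " $taken " in
        *" $subnet "*) echo "skipping taken subnet $subnet" ;;
        *) docker network create --driver=bridge \
             --subnet="$subnet" --gateway="${subnet%.0/24}.1" demo-net
           break ;;
      esac
    done
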
	I1218 01:55:18.306806 1575756 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-459533" container
	I1218 01:55:18.306897 1575756 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 01:55:18.323753 1575756 cli_runner.go:164] Run: docker volume create kindnet-459533 --label name.minikube.sigs.k8s.io=kindnet-459533 --label created_by.minikube.sigs.k8s.io=true
	I1218 01:55:18.341634 1575756 oci.go:103] Successfully created a docker volume kindnet-459533
	I1218 01:55:18.341728 1575756 cli_runner.go:164] Run: docker run --rm --name kindnet-459533-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-459533 --entrypoint /usr/bin/test -v kindnet-459533:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 01:55:18.882981 1575756 oci.go:107] Successfully prepared a docker volume kindnet-459533
	I1218 01:55:18.883052 1575756 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1218 01:55:18.883064 1575756 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 01:55:18.883148 1575756 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-459533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 01:55:22.865127 1575756 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-459533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.981938735s)
	I1218 01:55:22.865170 1575756 kic.go:203] duration metric: took 3.982100209s to extract preloaded images to volume ...
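
The preload step mounts the lz4-compressed image tarball read-only into a throwaway kicbase container and untars it into the named volume that later becomes the node's /var. The same command, restated as a sketch with the long arguments factored into shell variables (PRELOAD and KICBASE are placeholders for the exact paths logged above):

    # Unpack the preloaded containerd image store into the node's volume.
    PRELOAD=/path/to/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
    KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" \
      -v kindnet-459533:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir
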
	W1218 01:55:22.865306 1575756 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 01:55:22.865424 1575756 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 01:55:22.925726 1575756 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-459533 --name kindnet-459533 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-459533 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-459533 --network kindnet-459533 --ip 192.168.85.2 --volume kindnet-459533:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 01:55:23.228281 1575756 cli_runner.go:164] Run: docker container inspect kindnet-459533 --format={{.State.Running}}
	I1218 01:55:23.249142 1575756 cli_runner.go:164] Run: docker container inspect kindnet-459533 --format={{.State.Status}}
	I1218 01:55:23.270331 1575756 cli_runner.go:164] Run: docker exec kindnet-459533 stat /var/lib/dpkg/alternatives/iptables
	I1218 01:55:23.326220 1575756 oci.go:144] the created container "kindnet-459533" has a running status.
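
A successful docker run is not treated as proof of a healthy node, so minikube immediately re-probes the container. The same three checks from the log, usable by hand when a kic node wedges:

    # Is the main process running, and what lifecycle state is the container in?
    docker container inspect kindnet-459533 --format '{{.State.Running}}'
    docker container inspect kindnet-459533 --format '{{.State.Status}}'
    # Can we exec into it and see a file the kicbase image must contain?
    docker exec kindnet-459533 stat /var/lib/dpkg/alternatives/iptables
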
	I1218 01:55:23.326248 1575756 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa...
	I1218 01:55:23.516861 1575756 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 01:55:23.546631 1575756 cli_runner.go:164] Run: docker container inspect kindnet-459533 --format={{.State.Status}}
	I1218 01:55:23.583242 1575756 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 01:55:23.583262 1575756 kic_runner.go:114] Args: [docker exec --privileged kindnet-459533 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 01:55:23.644164 1575756 cli_runner.go:164] Run: docker container inspect kindnet-459533 --format={{.State.Status}}
	I1218 01:55:23.671401 1575756 machine.go:94] provisionDockerMachine start ...
	I1218 01:55:23.671502 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:23.698907 1575756 main.go:143] libmachine: Using SSH client type: native
	I1218 01:55:23.699595 1575756 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1218 01:55:23.699617 1575756 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:55:23.700650 1575756 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1218 01:55:26.856142 1575756 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-459533
	
	I1218 01:55:26.856169 1575756 ubuntu.go:182] provisioning hostname "kindnet-459533"
	I1218 01:55:26.856242 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:26.873411 1575756 main.go:143] libmachine: Using SSH client type: native
	I1218 01:55:26.873731 1575756 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1218 01:55:26.873747 1575756 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-459533 && echo "kindnet-459533" | sudo tee /etc/hostname
	I1218 01:55:27.034089 1575756 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-459533
	
	I1218 01:55:27.034181 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:27.051897 1575756 main.go:143] libmachine: Using SSH client type: native
	I1218 01:55:27.052222 1575756 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1218 01:55:27.052242 1575756 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-459533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-459533/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-459533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:55:27.208924 1575756 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:55:27.208948 1575756 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:55:27.208979 1575756 ubuntu.go:190] setting up certificates
	I1218 01:55:27.208988 1575756 provision.go:84] configureAuth start
	I1218 01:55:27.209060 1575756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-459533
	I1218 01:55:27.226859 1575756 provision.go:143] copyHostCerts
	I1218 01:55:27.226931 1575756 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:55:27.226946 1575756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:55:27.227027 1575756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:55:27.227136 1575756 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:55:27.227147 1575756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:55:27.227177 1575756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:55:27.227242 1575756 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:55:27.227252 1575756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:55:27.227280 1575756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:55:27.227347 1575756 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.kindnet-459533 san=[127.0.0.1 192.168.85.2 kindnet-459533 localhost minikube]
	I1218 01:55:27.398821 1575756 provision.go:177] copyRemoteCerts
	I1218 01:55:27.398919 1575756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:55:27.398964 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:27.417520 1575756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa Username:docker}
	I1218 01:55:27.524901 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1218 01:55:27.543691 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 01:55:27.561579 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:55:27.578896 1575756 provision.go:87] duration metric: took 369.882369ms to configureAuth
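
configureAuth generates a server certificate whose SAN list has to cover every name a client might dial (127.0.0.1, the static node IP, the profile hostname, localhost, minikube), then scps it to /etc/docker inside the node. A quick way to double-check what landed, sketched with the stock openssl CLI run inside the node:

    # Show the subject and the SAN list of the generated server cert.
    openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
    # Confirm it chains to the CA that was copied next to it.
    openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
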
	I1218 01:55:27.578936 1575756 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:55:27.579166 1575756 config.go:182] Loaded profile config "kindnet-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:55:27.579180 1575756 machine.go:97] duration metric: took 3.907758046s to provisionDockerMachine
	I1218 01:55:27.579187 1575756 client.go:176] duration metric: took 9.381916967s to LocalClient.Create
	I1218 01:55:27.579208 1575756 start.go:167] duration metric: took 9.381987292s to libmachine.API.Create "kindnet-459533"
	I1218 01:55:27.579216 1575756 start.go:293] postStartSetup for "kindnet-459533" (driver="docker")
	I1218 01:55:27.579224 1575756 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:55:27.579281 1575756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:55:27.579329 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:27.596398 1575756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa Username:docker}
	I1218 01:55:27.704980 1575756 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:55:27.708372 1575756 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:55:27.708402 1575756 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:55:27.708414 1575756 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:55:27.708467 1575756 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:55:27.708554 1575756 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:55:27.708685 1575756 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:55:27.716116 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:55:27.734121 1575756 start.go:296] duration metric: took 154.890904ms for postStartSetup
	I1218 01:55:27.734543 1575756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-459533
	I1218 01:55:27.750917 1575756 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/config.json ...
	I1218 01:55:27.751215 1575756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:55:27.751270 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:27.769480 1575756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa Username:docker}
	I1218 01:55:27.873812 1575756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:55:27.878379 1575756 start.go:128] duration metric: took 9.684883538s to createHost
	I1218 01:55:27.878406 1575756 start.go:83] releasing machines lock for "kindnet-459533", held for 9.685010911s
	I1218 01:55:27.878476 1575756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-459533
	I1218 01:55:27.895184 1575756 ssh_runner.go:195] Run: cat /version.json
	I1218 01:55:27.895258 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:27.895521 1575756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:55:27.895586 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:27.912970 1575756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa Username:docker}
	I1218 01:55:27.924857 1575756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa Username:docker}
	I1218 01:55:28.017190 1575756 ssh_runner.go:195] Run: systemctl --version
	I1218 01:55:28.113247 1575756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:55:28.118127 1575756 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:55:28.118221 1575756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:55:28.148395 1575756 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1218 01:55:28.148435 1575756 start.go:496] detecting cgroup driver to use...
	I1218 01:55:28.148487 1575756 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:55:28.148568 1575756 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:55:28.170957 1575756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:55:28.193631 1575756 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:55:28.193740 1575756 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:55:28.213953 1575756 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:55:28.237835 1575756 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:55:28.361353 1575756 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:55:28.483053 1575756 docker.go:234] disabling docker service ...
	I1218 01:55:28.483118 1575756 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:55:28.505215 1575756 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:55:28.519550 1575756 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:55:28.631648 1575756 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:55:28.747712 1575756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:55:28.760786 1575756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:55:28.776334 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:55:28.786193 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:55:28.794974 1575756 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:55:28.795071 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:55:28.804054 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:55:28.813339 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:55:28.822319 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:55:28.832254 1575756 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:55:28.840999 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:55:28.850166 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:55:28.859162 1575756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:55:28.868249 1575756 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:55:28.876668 1575756 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:55:28.884227 1575756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:55:29.022673 1575756 ssh_runner.go:195] Run: sudo systemctl restart containerd
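
The run of sed edits above rewrites /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the host's cgroupfs driver, normalizes the runc runtime to io.containerd.runc.v2, and re-enables unprivileged ports, after which daemon-reload plus restart make containerd pick the file up. A short spot-check of the result inside the node:

    # Did the cgroup-driver and sandbox-image edits land in the file?
    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # Ask containerd itself for its merged view of the configuration.
    sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image'
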
	I1218 01:55:29.161980 1575756 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:55:29.162070 1575756 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:55:29.166137 1575756 start.go:564] Will wait 60s for crictl version
	I1218 01:55:29.166255 1575756 ssh_runner.go:195] Run: which crictl
	I1218 01:55:29.170222 1575756 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:55:29.193381 1575756 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:55:29.193460 1575756 ssh_runner.go:195] Run: containerd --version
	I1218 01:55:29.214798 1575756 ssh_runner.go:195] Run: containerd --version
	I1218 01:55:29.240418 1575756 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1218 01:55:29.243408 1575756 cli_runner.go:164] Run: docker network inspect kindnet-459533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:55:29.260268 1575756 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:55:29.264237 1575756 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
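
The one-liner above is minikube's idempotent hosts-pinning idiom: drop any stale entry for the name, append the fresh mapping to a temp file, and copy it back under sudo (a plain > redirect would be opened by the unprivileged shell, not by sudo). Unrolled with comments, as a sketch:

    # Remove any existing line that ends in "<tab>host.minikube.internal".
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    # Append the docker network's gateway IP for this cluster.
    echo $'192.168.85.1\thost.minikube.internal' >> /tmp/h.$$
    # Copy back under sudo; redirecting straight at /etc/hosts would fail.
    sudo cp /tmp/h.$$ /etc/hosts
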
	I1218 01:55:29.274736 1575756 kubeadm.go:884] updating cluster {Name:kindnet-459533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:55:29.274856 1575756 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1218 01:55:29.274928 1575756 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:55:29.300181 1575756 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:55:29.300206 1575756 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:55:29.300269 1575756 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:55:29.325314 1575756 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:55:29.325338 1575756 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:55:29.325345 1575756 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 containerd true true} ...
	I1218 01:55:29.325447 1575756 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-459533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1218 01:55:29.325521 1575756 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:55:29.351323 1575756 cni.go:84] Creating CNI manager for "kindnet"
	I1218 01:55:29.351357 1575756 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 01:55:29.351403 1575756 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-459533 NodeName:kindnet-459533 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:55:29.351545 1575756 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-459533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 01:55:29.351652 1575756 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1218 01:55:29.359662 1575756 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:55:29.359763 1575756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:55:29.367854 1575756 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1218 01:55:29.381875 1575756 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 01:55:29.395528 1575756 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1218 01:55:29.409373 1575756 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:55:29.413115 1575756 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
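	The one-liner above is an idempotent /etc/hosts update: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current mapping, and the result lands in a temp file that is copied back over /etc/hosts (cp rather than mv, so the original file's inode and permissions survive). A rough Go equivalent of the same logic, with error handling abbreviated and root privileges assumed:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const suffix = "\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, suffix) {
				kept = append(kept, line) // keep unrelated entries as-is
			}
		}
		kept = append(kept, "192.168.85.2"+suffix)

		// Stage the new content, then swap it in, mirroring the shell's
		// "> /tmp/h.$$; sudo cp" dance (rename shown here for brevity).
		if err := os.WriteFile("/etc/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		if err := os.Rename("/etc/hosts.new", "/etc/hosts"); err != nil {
			panic(err)
		}
	}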
	I1218 01:55:29.423199 1575756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:55:29.542795 1575756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:55:29.559699 1575756 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533 for IP: 192.168.85.2
	I1218 01:55:29.559718 1575756 certs.go:195] generating shared ca certs ...
	I1218 01:55:29.559733 1575756 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:29.559900 1575756 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:55:29.559950 1575756 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:55:29.559957 1575756 certs.go:257] generating profile certs ...
	I1218 01:55:29.560008 1575756 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.key
	I1218 01:55:29.560020 1575756 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt with IP's: []
	I1218 01:55:30.130230 1575756 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt ...
	I1218 01:55:30.130267 1575756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: {Name:mkaa12405ad902f2d53d662d4ff08b1d72cd908c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:30.130514 1575756 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.key ...
	I1218 01:55:30.130533 1575756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.key: {Name:mk39282da1a212fc37b8da29f00e588e0094e1f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:30.130640 1575756 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.key.08e39a9a
	I1218 01:55:30.130658 1575756 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.crt.08e39a9a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1218 01:55:30.512061 1575756 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.crt.08e39a9a ...
	I1218 01:55:30.512095 1575756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.crt.08e39a9a: {Name:mk5df6b544707266fdd5adce6232399f07ead84d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:30.512285 1575756 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.key.08e39a9a ...
	I1218 01:55:30.512300 1575756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.key.08e39a9a: {Name:mk94edf53bc7203d0c0639c29fdc22ab555b4374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:30.512395 1575756 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.crt.08e39a9a -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.crt
	I1218 01:55:30.512481 1575756 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.key.08e39a9a -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.key
	I1218 01:55:30.512543 1575756 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/proxy-client.key
	I1218 01:55:30.512562 1575756 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/proxy-client.crt with IP's: []
	I1218 01:55:31.576879 1575756 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/proxy-client.crt ...
	I1218 01:55:31.576913 1575756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/proxy-client.crt: {Name:mk1d0cb0c2044cb108d0f105009d5f07cf355277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:31.577105 1575756 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/proxy-client.key ...
	I1218 01:55:31.577119 1575756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/proxy-client.key: {Name:mke380ba677bc48e1f343fcbb35709decf6f5a2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:31.577325 1575756 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:55:31.577371 1575756 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:55:31.577385 1575756 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:55:31.577412 1575756 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:55:31.577444 1575756 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:55:31.577473 1575756 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:55:31.577523 1575756 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:55:31.578136 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:55:31.597274 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:55:31.617513 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:55:31.637368 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:55:31.656550 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1218 01:55:31.675436 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:55:31.694259 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:55:31.712278 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:55:31.730346 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:55:31.748683 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:55:31.767403 1575756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:55:31.785940 1575756 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:55:31.799130 1575756 ssh_runner.go:195] Run: openssl version
	I1218 01:55:31.805642 1575756 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:55:31.813147 1575756 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:55:31.820638 1575756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:55:31.824349 1575756 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:55:31.824444 1575756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:55:31.866394 1575756 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:55:31.874136 1575756 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1218 01:55:31.881782 1575756 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:55:31.889642 1575756 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:55:31.900176 1575756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:55:31.904801 1575756 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:55:31.904866 1575756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:55:31.949662 1575756 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 01:55:31.959123 1575756 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
	I1218 01:55:31.966658 1575756 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:55:31.974413 1575756 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:55:31.982143 1575756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:55:31.986305 1575756 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:55:31.986370 1575756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:55:32.028711 1575756 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:55:32.036683 1575756 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
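	The hash-and-link sequence above, repeated for each installed PEM, exists because OpenSSL resolves trusted CAs in /etc/ssl/certs by subject-hash filenames of the form <hash>.0: `openssl x509 -hash -noout -in` prints the hash, and the symlink makes the certificate discoverable. A small sketch of that step, illustrative rather than minikube's certs.go:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates the "<subject-hash>.0" symlink OpenSSL expects.
	func linkBySubjectHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace a stale link, like `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Path taken from the log; requires root, like the sudo commands above.
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}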
	I1218 01:55:32.044447 1575756 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:55:32.048196 1575756 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 01:55:32.048254 1575756 kubeadm.go:401] StartCluster: {Name:kindnet-459533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:55:32.048328 1575756 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:55:32.048395 1575756 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:55:32.079689 1575756 cri.go:89] found id: ""
	I1218 01:55:32.079765 1575756 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:55:32.088689 1575756 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 01:55:32.097706 1575756 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 01:55:32.097778 1575756 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 01:55:32.105983 1575756 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 01:55:32.106003 1575756 kubeadm.go:158] found existing configuration files:
	
	I1218 01:55:32.106089 1575756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 01:55:32.114231 1575756 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 01:55:32.114316 1575756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 01:55:32.122231 1575756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 01:55:32.130539 1575756 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 01:55:32.130650 1575756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 01:55:32.138651 1575756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 01:55:32.148895 1575756 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 01:55:32.148961 1575756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 01:55:32.156735 1575756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 01:55:32.165640 1575756 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 01:55:32.165725 1575756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1218 01:55:32.173025 1575756 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 01:55:32.240131 1575756 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1218 01:55:32.240368 1575756 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 01:55:32.305224 1575756 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 01:55:50.860919 1575756 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1218 01:55:50.860981 1575756 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 01:55:50.861081 1575756 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 01:55:50.861166 1575756 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 01:55:50.861231 1575756 kubeadm.go:319] OS: Linux
	I1218 01:55:50.861278 1575756 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 01:55:50.861333 1575756 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 01:55:50.861381 1575756 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 01:55:50.861441 1575756 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 01:55:50.861495 1575756 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 01:55:50.861549 1575756 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 01:55:50.861715 1575756 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 01:55:50.861797 1575756 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 01:55:50.861856 1575756 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 01:55:50.861943 1575756 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 01:55:50.862052 1575756 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 01:55:50.862147 1575756 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 01:55:50.862213 1575756 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 01:55:50.865128 1575756 out.go:252]   - Generating certificates and keys ...
	I1218 01:55:50.865224 1575756 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 01:55:50.865289 1575756 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 01:55:50.865355 1575756 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 01:55:50.865411 1575756 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 01:55:50.865472 1575756 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 01:55:50.865522 1575756 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 01:55:50.865576 1575756 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 01:55:50.865705 1575756 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-459533 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:55:50.865764 1575756 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 01:55:50.865883 1575756 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-459533 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 01:55:50.865954 1575756 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 01:55:50.866027 1575756 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 01:55:50.866079 1575756 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 01:55:50.866135 1575756 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 01:55:50.866186 1575756 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 01:55:50.866241 1575756 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 01:55:50.866296 1575756 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 01:55:50.866358 1575756 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 01:55:50.866412 1575756 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 01:55:50.866493 1575756 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 01:55:50.866559 1575756 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 01:55:50.869822 1575756 out.go:252]   - Booting up control plane ...
	I1218 01:55:50.870092 1575756 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 01:55:50.870203 1575756 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 01:55:50.870375 1575756 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 01:55:50.870580 1575756 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 01:55:50.870734 1575756 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 01:55:50.870855 1575756 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 01:55:50.870946 1575756 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 01:55:50.870986 1575756 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 01:55:50.871128 1575756 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 01:55:50.871249 1575756 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 01:55:50.871315 1575756 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501796514s
	I1218 01:55:50.871447 1575756 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1218 01:55:50.871559 1575756 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1218 01:55:50.871652 1575756 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1218 01:55:50.871731 1575756 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1218 01:55:50.871808 1575756 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.111507429s
	I1218 01:55:50.871876 1575756 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003166972s
	I1218 01:55:50.871988 1575756 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.525973351s
	I1218 01:55:50.872109 1575756 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 01:55:50.872237 1575756 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 01:55:50.872306 1575756 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 01:55:50.872570 1575756 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-459533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 01:55:50.872670 1575756 kubeadm.go:319] [bootstrap-token] Using token: kffr7x.6tz7q7pic5z3tsnm
	I1218 01:55:50.875741 1575756 out.go:252]   - Configuring RBAC rules ...
	I1218 01:55:50.875915 1575756 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 01:55:50.876022 1575756 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 01:55:50.876216 1575756 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 01:55:50.876385 1575756 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 01:55:50.876544 1575756 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 01:55:50.876831 1575756 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 01:55:50.877046 1575756 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 01:55:50.877114 1575756 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1218 01:55:50.877162 1575756 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1218 01:55:50.877166 1575756 kubeadm.go:319] 
	I1218 01:55:50.877235 1575756 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1218 01:55:50.877240 1575756 kubeadm.go:319] 
	I1218 01:55:50.877317 1575756 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1218 01:55:50.877320 1575756 kubeadm.go:319] 
	I1218 01:55:50.877346 1575756 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1218 01:55:50.877404 1575756 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 01:55:50.877454 1575756 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 01:55:50.877462 1575756 kubeadm.go:319] 
	I1218 01:55:50.877515 1575756 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1218 01:55:50.877519 1575756 kubeadm.go:319] 
	I1218 01:55:50.877566 1575756 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 01:55:50.877570 1575756 kubeadm.go:319] 
	I1218 01:55:50.877622 1575756 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1218 01:55:50.877703 1575756 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 01:55:50.877772 1575756 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 01:55:50.877775 1575756 kubeadm.go:319] 
	I1218 01:55:50.877859 1575756 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 01:55:50.877935 1575756 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1218 01:55:50.877939 1575756 kubeadm.go:319] 
	I1218 01:55:50.878022 1575756 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kffr7x.6tz7q7pic5z3tsnm \
	I1218 01:55:50.878125 1575756 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b4077e98a4859192b0456bf3327d2197d85ea7f70e768b14f3ff5e295e626e \
	I1218 01:55:50.878147 1575756 kubeadm.go:319] 	--control-plane 
	I1218 01:55:50.878150 1575756 kubeadm.go:319] 
	I1218 01:55:50.878237 1575756 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1218 01:55:50.878240 1575756 kubeadm.go:319] 
	I1218 01:55:50.878322 1575756 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kffr7x.6tz7q7pic5z3tsnm \
	I1218 01:55:50.878439 1575756 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b4077e98a4859192b0456bf3327d2197d85ea7f70e768b14f3ff5e295e626e 
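	The --discovery-token-ca-cert-hash printed in the join commands above pins the cluster CA: kubeadm publishes the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, and a joining node refuses any CA whose key does not match. A sketch of recomputing that value from this cluster's CA (file location taken from the log, the rest illustrative):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}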
	I1218 01:55:50.878447 1575756 cni.go:84] Creating CNI manager for "kindnet"
	I1218 01:55:50.881608 1575756 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1218 01:55:50.884407 1575756 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 01:55:50.889115 1575756 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1218 01:55:50.889143 1575756 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1218 01:55:50.904863 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 01:55:51.220537 1575756 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 01:55:51.220737 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 01:55:51.220784 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-459533 minikube.k8s.io/updated_at=2025_12_18T01_55_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=kindnet-459533 minikube.k8s.io/primary=true
	I1218 01:55:51.332808 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 01:55:51.332884 1575756 ops.go:34] apiserver oom_adj: -16
	I1218 01:55:51.833472 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 01:55:52.333363 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 01:55:52.833470 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 01:55:53.333359 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 01:55:53.833513 1575756 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 01:55:53.945167 1575756 kubeadm.go:1114] duration metric: took 2.724539739s to wait for elevateKubeSystemPrivileges
	I1218 01:55:53.945256 1575756 kubeadm.go:403] duration metric: took 21.897005251s to StartCluster
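	The burst of identical `kubectl get sa default` runs above is a poll: the command is repeated every 500ms until the "default" ServiceAccount exists, the signal that the control plane is far enough along for the minikube-rbac ClusterRoleBinding to take effect (the elevateKubeSystemPrivileges step whose duration is reported above). A minimal sketch of that loop, with the binary and kubeconfig paths from the log and the timeout an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // timeout is illustrative
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.34.3/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		panic("timed out waiting for the default service account")
	}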
	I1218 01:55:53.945287 1575756 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:53.945400 1575756 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:55:53.946433 1575756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:55:53.946770 1575756 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 01:55:53.947027 1575756 config.go:182] Loaded profile config "kindnet-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:55:53.947099 1575756 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:55:53.947223 1575756 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 01:55:53.947396 1575756 addons.go:70] Setting storage-provisioner=true in profile "kindnet-459533"
	I1218 01:55:53.947415 1575756 addons.go:239] Setting addon storage-provisioner=true in "kindnet-459533"
	I1218 01:55:53.947437 1575756 host.go:66] Checking if "kindnet-459533" exists ...
	I1218 01:55:53.948016 1575756 cli_runner.go:164] Run: docker container inspect kindnet-459533 --format={{.State.Status}}
	I1218 01:55:53.948211 1575756 addons.go:70] Setting default-storageclass=true in profile "kindnet-459533"
	I1218 01:55:53.948248 1575756 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-459533"
	I1218 01:55:53.948562 1575756 cli_runner.go:164] Run: docker container inspect kindnet-459533 --format={{.State.Status}}
	I1218 01:55:53.951747 1575756 out.go:179] * Verifying Kubernetes components...
	I1218 01:55:53.956319 1575756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:55:53.990169 1575756 addons.go:239] Setting addon default-storageclass=true in "kindnet-459533"
	I1218 01:55:53.990224 1575756 host.go:66] Checking if "kindnet-459533" exists ...
	I1218 01:55:53.990767 1575756 cli_runner.go:164] Run: docker container inspect kindnet-459533 --format={{.State.Status}}
	I1218 01:55:54.001612 1575756 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:55:54.005951 1575756 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:55:54.005987 1575756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:55:54.006061 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:54.022829 1575756 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:55:54.022861 1575756 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:55:54.022945 1575756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-459533
	I1218 01:55:54.042166 1575756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa Username:docker}
	I1218 01:55:54.064526 1575756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/kindnet-459533/id_rsa Username:docker}
	I1218 01:55:54.229952 1575756 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 01:55:54.230063 1575756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:55:54.314693 1575756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:55:54.337069 1575756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:55:54.554672 1575756 node_ready.go:35] waiting up to 15m0s for node "kindnet-459533" to be "Ready" ...
	I1218 01:55:54.554969 1575756 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
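	The sed/kubectl pipeline above rewrites the coredns ConfigMap in place: a hosts block is inserted ahead of the Corefile's `forward . /etc/resolv.conf` line and a `log` directive ahead of `errors`, then `kubectl replace -f -` pushes the edited Corefile back. The inserted lines (taken verbatim from the command; the surrounding default Corefile is elided) leave the relevant fragment looking roughly like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

	The `fallthrough` matters: names not listed in the hosts block continue on to the forward plugin instead of resolving to NXDOMAIN, so only host.minikube.internal is special-cased.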
	I1218 01:55:55.067095 1575756 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-459533" context rescaled to 1 replicas
	I1218 01:55:55.144865 1575756 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1218 01:55:55.147800 1575756 addons.go:530] duration metric: took 1.20057297s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1218 01:55:56.558238 1575756 node_ready.go:57] node "kindnet-459533" has "Ready":"False" status (will retry)
	W1218 01:55:59.057265 1575756 node_ready.go:57] node "kindnet-459533" has "Ready":"False" status (will retry)
	W1218 01:56:01.057532 1575756 node_ready.go:57] node "kindnet-459533" has "Ready":"False" status (will retry)
	W1218 01:56:03.058563 1575756 node_ready.go:57] node "kindnet-459533" has "Ready":"False" status (will retry)
	W1218 01:56:05.557801 1575756 node_ready.go:57] node "kindnet-459533" has "Ready":"False" status (will retry)
	I1218 01:56:07.557504 1575756 node_ready.go:49] node "kindnet-459533" is "Ready"
	I1218 01:56:07.557586 1575756 node_ready.go:38] duration metric: took 13.002878268s for node "kindnet-459533" to be "Ready" ...
	I1218 01:56:07.557635 1575756 api_server.go:52] waiting for apiserver process to appear ...
	I1218 01:56:07.557741 1575756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:56:07.570454 1575756 api_server.go:72] duration metric: took 13.62310928s to wait for apiserver process to appear ...
	I1218 01:56:07.570522 1575756 api_server.go:88] waiting for apiserver healthz status ...
	I1218 01:56:07.570548 1575756 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1218 01:56:07.579771 1575756 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1218 01:56:07.580893 1575756 api_server.go:141] control plane version: v1.34.3
	I1218 01:56:07.580918 1575756 api_server.go:131] duration metric: took 10.383682ms to wait for apiserver health ...
	I1218 01:56:07.580940 1575756 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 01:56:07.584510 1575756 system_pods.go:59] 8 kube-system pods found
	I1218 01:56:07.584547 1575756 system_pods.go:61] "coredns-66bc5c9577-nfsq7" [68322084-c61d-497e-9800-0ed293a45bee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 01:56:07.584555 1575756 system_pods.go:61] "etcd-kindnet-459533" [098072aa-d987-474d-ab51-f703fd226e96] Running
	I1218 01:56:07.584560 1575756 system_pods.go:61] "kindnet-vrd4h" [7997e597-9027-4e35-baff-113674f3c8b0] Running
	I1218 01:56:07.584565 1575756 system_pods.go:61] "kube-apiserver-kindnet-459533" [d7945c27-f2fd-4b95-9019-7b54f40f3af5] Running
	I1218 01:56:07.584573 1575756 system_pods.go:61] "kube-controller-manager-kindnet-459533" [97f1d2c4-b55d-44a1-9de3-b93cd47e8daf] Running
	I1218 01:56:07.584578 1575756 system_pods.go:61] "kube-proxy-6mz7w" [4cfe98a7-b33a-4ca6-8c4d-5ce793200206] Running
	I1218 01:56:07.584582 1575756 system_pods.go:61] "kube-scheduler-kindnet-459533" [e92f7efb-cf3c-4d33-a575-21523f653213] Running
	I1218 01:56:07.584587 1575756 system_pods.go:61] "storage-provisioner" [bdd82d40-cc63-479c-ba92-b02c98a3b20b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 01:56:07.584601 1575756 system_pods.go:74] duration metric: took 3.651514ms to wait for pod list to return data ...
	I1218 01:56:07.584610 1575756 default_sa.go:34] waiting for default service account to be created ...
	I1218 01:56:07.587348 1575756 default_sa.go:45] found service account: "default"
	I1218 01:56:07.587374 1575756 default_sa.go:55] duration metric: took 2.756387ms for default service account to be created ...
	I1218 01:56:07.587384 1575756 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 01:56:07.594549 1575756 system_pods.go:86] 8 kube-system pods found
	I1218 01:56:07.594587 1575756 system_pods.go:89] "coredns-66bc5c9577-nfsq7" [68322084-c61d-497e-9800-0ed293a45bee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 01:56:07.594594 1575756 system_pods.go:89] "etcd-kindnet-459533" [098072aa-d987-474d-ab51-f703fd226e96] Running
	I1218 01:56:07.594623 1575756 system_pods.go:89] "kindnet-vrd4h" [7997e597-9027-4e35-baff-113674f3c8b0] Running
	I1218 01:56:07.594634 1575756 system_pods.go:89] "kube-apiserver-kindnet-459533" [d7945c27-f2fd-4b95-9019-7b54f40f3af5] Running
	I1218 01:56:07.594639 1575756 system_pods.go:89] "kube-controller-manager-kindnet-459533" [97f1d2c4-b55d-44a1-9de3-b93cd47e8daf] Running
	I1218 01:56:07.594646 1575756 system_pods.go:89] "kube-proxy-6mz7w" [4cfe98a7-b33a-4ca6-8c4d-5ce793200206] Running
	I1218 01:56:07.594653 1575756 system_pods.go:89] "kube-scheduler-kindnet-459533" [e92f7efb-cf3c-4d33-a575-21523f653213] Running
	I1218 01:56:07.594659 1575756 system_pods.go:89] "storage-provisioner" [bdd82d40-cc63-479c-ba92-b02c98a3b20b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 01:56:07.594698 1575756 retry.go:31] will retry after 264.528319ms: missing components: kube-dns
	I1218 01:56:07.869080 1575756 system_pods.go:86] 8 kube-system pods found
	I1218 01:56:07.869122 1575756 system_pods.go:89] "coredns-66bc5c9577-nfsq7" [68322084-c61d-497e-9800-0ed293a45bee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 01:56:07.869130 1575756 system_pods.go:89] "etcd-kindnet-459533" [098072aa-d987-474d-ab51-f703fd226e96] Running
	I1218 01:56:07.869159 1575756 system_pods.go:89] "kindnet-vrd4h" [7997e597-9027-4e35-baff-113674f3c8b0] Running
	I1218 01:56:07.869165 1575756 system_pods.go:89] "kube-apiserver-kindnet-459533" [d7945c27-f2fd-4b95-9019-7b54f40f3af5] Running
	I1218 01:56:07.869175 1575756 system_pods.go:89] "kube-controller-manager-kindnet-459533" [97f1d2c4-b55d-44a1-9de3-b93cd47e8daf] Running
	I1218 01:56:07.869181 1575756 system_pods.go:89] "kube-proxy-6mz7w" [4cfe98a7-b33a-4ca6-8c4d-5ce793200206] Running
	I1218 01:56:07.869193 1575756 system_pods.go:89] "kube-scheduler-kindnet-459533" [e92f7efb-cf3c-4d33-a575-21523f653213] Running
	I1218 01:56:07.869200 1575756 system_pods.go:89] "storage-provisioner" [bdd82d40-cc63-479c-ba92-b02c98a3b20b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 01:56:07.869214 1575756 retry.go:31] will retry after 235.513821ms: missing components: kube-dns
	I1218 01:56:08.109834 1575756 system_pods.go:86] 8 kube-system pods found
	I1218 01:56:08.109870 1575756 system_pods.go:89] "coredns-66bc5c9577-nfsq7" [68322084-c61d-497e-9800-0ed293a45bee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 01:56:08.109878 1575756 system_pods.go:89] "etcd-kindnet-459533" [098072aa-d987-474d-ab51-f703fd226e96] Running
	I1218 01:56:08.109885 1575756 system_pods.go:89] "kindnet-vrd4h" [7997e597-9027-4e35-baff-113674f3c8b0] Running
	I1218 01:56:08.109889 1575756 system_pods.go:89] "kube-apiserver-kindnet-459533" [d7945c27-f2fd-4b95-9019-7b54f40f3af5] Running
	I1218 01:56:08.109894 1575756 system_pods.go:89] "kube-controller-manager-kindnet-459533" [97f1d2c4-b55d-44a1-9de3-b93cd47e8daf] Running
	I1218 01:56:08.109900 1575756 system_pods.go:89] "kube-proxy-6mz7w" [4cfe98a7-b33a-4ca6-8c4d-5ce793200206] Running
	I1218 01:56:08.109904 1575756 system_pods.go:89] "kube-scheduler-kindnet-459533" [e92f7efb-cf3c-4d33-a575-21523f653213] Running
	I1218 01:56:08.109910 1575756 system_pods.go:89] "storage-provisioner" [bdd82d40-cc63-479c-ba92-b02c98a3b20b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 01:56:08.109933 1575756 retry.go:31] will retry after 338.620398ms: missing components: kube-dns
	I1218 01:56:08.452562 1575756 system_pods.go:86] 8 kube-system pods found
	I1218 01:56:08.452602 1575756 system_pods.go:89] "coredns-66bc5c9577-nfsq7" [68322084-c61d-497e-9800-0ed293a45bee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 01:56:08.452609 1575756 system_pods.go:89] "etcd-kindnet-459533" [098072aa-d987-474d-ab51-f703fd226e96] Running
	I1218 01:56:08.452617 1575756 system_pods.go:89] "kindnet-vrd4h" [7997e597-9027-4e35-baff-113674f3c8b0] Running
	I1218 01:56:08.452655 1575756 system_pods.go:89] "kube-apiserver-kindnet-459533" [d7945c27-f2fd-4b95-9019-7b54f40f3af5] Running
	I1218 01:56:08.452661 1575756 system_pods.go:89] "kube-controller-manager-kindnet-459533" [97f1d2c4-b55d-44a1-9de3-b93cd47e8daf] Running
	I1218 01:56:08.452672 1575756 system_pods.go:89] "kube-proxy-6mz7w" [4cfe98a7-b33a-4ca6-8c4d-5ce793200206] Running
	I1218 01:56:08.452678 1575756 system_pods.go:89] "kube-scheduler-kindnet-459533" [e92f7efb-cf3c-4d33-a575-21523f653213] Running
	I1218 01:56:08.452696 1575756 system_pods.go:89] "storage-provisioner" [bdd82d40-cc63-479c-ba92-b02c98a3b20b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 01:56:08.452714 1575756 retry.go:31] will retry after 434.994917ms: missing components: kube-dns
	I1218 01:56:08.895251 1575756 system_pods.go:86] 8 kube-system pods found
	I1218 01:56:08.895284 1575756 system_pods.go:89] "coredns-66bc5c9577-nfsq7" [68322084-c61d-497e-9800-0ed293a45bee] Running
	I1218 01:56:08.895292 1575756 system_pods.go:89] "etcd-kindnet-459533" [098072aa-d987-474d-ab51-f703fd226e96] Running
	I1218 01:56:08.895296 1575756 system_pods.go:89] "kindnet-vrd4h" [7997e597-9027-4e35-baff-113674f3c8b0] Running
	I1218 01:56:08.895301 1575756 system_pods.go:89] "kube-apiserver-kindnet-459533" [d7945c27-f2fd-4b95-9019-7b54f40f3af5] Running
	I1218 01:56:08.895305 1575756 system_pods.go:89] "kube-controller-manager-kindnet-459533" [97f1d2c4-b55d-44a1-9de3-b93cd47e8daf] Running
	I1218 01:56:08.895309 1575756 system_pods.go:89] "kube-proxy-6mz7w" [4cfe98a7-b33a-4ca6-8c4d-5ce793200206] Running
	I1218 01:56:08.895314 1575756 system_pods.go:89] "kube-scheduler-kindnet-459533" [e92f7efb-cf3c-4d33-a575-21523f653213] Running
	I1218 01:56:08.895318 1575756 system_pods.go:89] "storage-provisioner" [bdd82d40-cc63-479c-ba92-b02c98a3b20b] Running
	I1218 01:56:08.895327 1575756 system_pods.go:126] duration metric: took 1.3079163s to wait for k8s-apps to be running ...
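	The "will retry after 264.528319ms / 235.513821ms / 338.620398ms / 434.994917ms" lines above show randomized waits between attempts, which keeps many concurrent pollers from firing in lockstep. A sketch of one way to produce such jittered retries (not minikube's retry.go; the condition is stubbed and the intervals are illustrative):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs check until it succeeds or attempts are exhausted,
	// sleeping a jittered multiple of base between tries.
	func retry(check func() error, base time.Duration, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = check(); err == nil {
				return nil
			}
			// jitter in [0.75, 1.5) of the base interval
			sleep := time.Duration(float64(base) * (0.75 + 0.75*rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}

	func main() {
		attempt := 0
		_ = retry(func() error {
			attempt++
			if attempt < 4 {
				return fmt.Errorf("missing components: kube-dns")
			}
			return nil // e.g. coredns finally Running
		}, 300*time.Millisecond, 10)
	}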
	I1218 01:56:08.895334 1575756 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 01:56:08.895388 1575756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:56:08.910541 1575756 system_svc.go:56] duration metric: took 15.19668ms WaitForService to wait for kubelet
	I1218 01:56:08.910629 1575756 kubeadm.go:587] duration metric: took 14.9632962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 01:56:08.910663 1575756 node_conditions.go:102] verifying NodePressure condition ...
	I1218 01:56:08.914472 1575756 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 01:56:08.914550 1575756 node_conditions.go:123] node cpu capacity is 2
	I1218 01:56:08.914576 1575756 node_conditions.go:105] duration metric: took 3.894733ms to run NodePressure ...
	I1218 01:56:08.914602 1575756 start.go:242] waiting for startup goroutines ...
	I1218 01:56:08.914638 1575756 start.go:247] waiting for cluster config update ...
	I1218 01:56:08.914664 1575756 start.go:256] writing updated cluster config ...
	I1218 01:56:08.914998 1575756 ssh_runner.go:195] Run: rm -f paused
	I1218 01:56:08.919268 1575756 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1218 01:56:08.923565 1575756 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nfsq7" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:08.929694 1575756 pod_ready.go:94] pod "coredns-66bc5c9577-nfsq7" is "Ready"
	I1218 01:56:08.929777 1575756 pod_ready.go:86] duration metric: took 6.133183ms for pod "coredns-66bc5c9577-nfsq7" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:08.932735 1575756 pod_ready.go:83] waiting for pod "etcd-kindnet-459533" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:08.938812 1575756 pod_ready.go:94] pod "etcd-kindnet-459533" is "Ready"
	I1218 01:56:08.938886 1575756 pod_ready.go:86] duration metric: took 6.076092ms for pod "etcd-kindnet-459533" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:08.941792 1575756 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-459533" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:08.947404 1575756 pod_ready.go:94] pod "kube-apiserver-kindnet-459533" is "Ready"
	I1218 01:56:08.947482 1575756 pod_ready.go:86] duration metric: took 5.617707ms for pod "kube-apiserver-kindnet-459533" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:08.950375 1575756 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-459533" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:09.323407 1575756 pod_ready.go:94] pod "kube-controller-manager-kindnet-459533" is "Ready"
	I1218 01:56:09.323436 1575756 pod_ready.go:86] duration metric: took 372.990613ms for pod "kube-controller-manager-kindnet-459533" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:09.523482 1575756 pod_ready.go:83] waiting for pod "kube-proxy-6mz7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:09.923687 1575756 pod_ready.go:94] pod "kube-proxy-6mz7w" is "Ready"
	I1218 01:56:09.923716 1575756 pod_ready.go:86] duration metric: took 400.205228ms for pod "kube-proxy-6mz7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:10.124377 1575756 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-459533" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:10.523601 1575756 pod_ready.go:94] pod "kube-scheduler-kindnet-459533" is "Ready"
	I1218 01:56:10.523630 1575756 pod_ready.go:86] duration metric: took 399.215677ms for pod "kube-scheduler-kindnet-459533" in "kube-system" namespace to be "Ready" or be gone ...
	I1218 01:56:10.523645 1575756 pod_ready.go:40] duration metric: took 1.604293475s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1218 01:56:10.586241 1575756 start.go:625] kubectl: 1.33.2, cluster: 1.34.3 (minor skew: 1)
	I1218 01:56:10.589706 1575756 out.go:179] * Done! kubectl is now configured to use "kindnet-459533" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343365892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343381514Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343418092Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343433542Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343443264Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343454948Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343463957Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343476125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343492305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343522483Z" level=info msg="Connect containerd service"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343787182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.344338751Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359530690Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359745094Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359671930Z" level=info msg="Start subscribing containerd event"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.365773580Z" level=info msg="Start recovering state"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383747116Z" level=info msg="Start event monitor"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383803385Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383814093Z" level=info msg="Start streaming server"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383824997Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383833907Z" level=info msg="runtime interface starting up..."
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383841612Z" level=info msg="starting plugins..."
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383874005Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 01:41:23 no-preload-970975 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.385843444Z" level=info msg="containerd successfully booted in 0.065726s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:56:28.981581    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:56:28.982589    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:56:28.984211    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:56:28.984524    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:56:28.985982    8071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:56:29 up  8:38,  0 user,  load average: 2.43, 1.59, 1.44
	Linux no-preload-970975 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:56:25 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:56:26 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 18 01:56:26 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:56:26 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:56:26 no-preload-970975 kubelet[7930]: E1218 01:56:26.197410    7930 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:56:26 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:56:26 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:56:26 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 18 01:56:26 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:56:26 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:56:26 no-preload-970975 kubelet[7936]: E1218 01:56:26.940580    7936 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:56:26 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:56:26 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:56:27 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 18 01:56:27 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:56:27 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:56:27 no-preload-970975 kubelet[7955]: E1218 01:56:27.706924    7955 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:56:27 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:56:27 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:56:28 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 18 01:56:28 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:56:28 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:56:28 no-preload-970975 kubelet[7985]: E1218 01:56:28.486423    7985 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:56:28 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:56:28 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
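
Note on the failure mode above: every kubelet restart in this capture exits with "kubelet is configured to not run on a host using cgroup v1", so no static pods (including the apiserver) ever start, which is consistent with localhost:8443 refusing connections. A minimal triage sketch, assuming shell access to the host and node (the profile name is taken from this run; the commands themselves are standard coreutils/minikube usage and are not part of the captured log):

	# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates the legacy cgroup v1 hierarchy
	stat -fc %T /sys/fs/cgroup/
	# run the same check inside the minikube node container for this profile
	out/minikube-linux-arm64 ssh -p no-preload-970975 -- stat -fc %T /sys/fs/cgroup/
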
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 2 (496.704684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (9.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-120615 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (325.592438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-120615 -n newest-cni-120615
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (341.561742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-120615 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (318.275753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-120615 -n newest-cni-120615
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (321.41216ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
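The status checks above each select a single field from minikube's status struct via a Go template. The same fields can be queried in one call; an illustrative variant of the commands already shown in this log (field names come from the invocations above, not from a separate API):

	out/minikube-linux-arm64 status -p newest-cni-120615 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'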
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-120615
helpers_test.go:244: (dbg) docker inspect newest-cni-120615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	        "Created": "2025-12-18T01:37:46.267734033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1550552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:47:25.795117457Z",
	            "FinishedAt": "2025-12-18T01:47:24.299442993Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hosts",
	        "LogPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1-json.log",
	        "Name": "/newest-cni-120615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-120615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-120615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	                "LowerDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-120615",
	                "Source": "/var/lib/docker/volumes/newest-cni-120615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-120615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-120615",
	                "name.minikube.sigs.k8s.io": "newest-cni-120615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "03d6121fa7465afe54c6849e5d9912cbd0edd591438a044dd295828487da20b2",
	            "SandboxKey": "/var/run/docker/netns/03d6121fa746",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34220"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-120615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:76:51:cf:bd:72",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3561ba231e6c48a625724c6039bb103aabf4482d7db78bad659da0b08d445469",
	                    "EndpointID": "94d026911af52030bc96754a63e0334f51dcbb249930773e615cdc9fb74f4e43",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-120615",
	                        "dd9cd12a762d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
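
The inspect output shows the apiserver port 8443/tcp published on 127.0.0.1:34220. The Go-template extraction minikube itself uses for the SSH port later in this log works for any published port; a sketch, assuming the docker CLI on the host:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-120615
	# prints 34220 for this container; the apiserver would be reachable at
	# https://127.0.0.1:34220 once it is actually running
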
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (339.302425ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-120615 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-120615 logs -n 25: (1.778425983s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:39 UTC │                     │
	│ stop    │ -p no-preload-970975 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ addons  │ enable dashboard -p no-preload-970975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ start   │ -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-120615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:46 UTC │                     │
	│ stop    │ -p newest-cni-120615 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │ 18 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-120615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │ 18 Dec 25 01:47 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │                     │
	│ image   │ newest-cni-120615 image list --format=json                                                                                                                                                                                                               │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:53 UTC │ 18 Dec 25 01:53 UTC │
	│ pause   │ -p newest-cni-120615 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:53 UTC │ 18 Dec 25 01:53 UTC │
	│ unpause │ -p newest-cni-120615 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:53 UTC │ 18 Dec 25 01:53 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:47:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:47:25.355718 1550381 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:47:25.355915 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.355941 1550381 out.go:374] Setting ErrFile to fd 2...
	I1218 01:47:25.355960 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.356345 1550381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:47:25.356861 1550381 out.go:368] Setting JSON to false
	I1218 01:47:25.358213 1550381 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30592,"bootTime":1765991854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:47:25.358285 1550381 start.go:143] virtualization:  
	I1218 01:47:25.361184 1550381 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:47:25.364947 1550381 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:47:25.365006 1550381 notify.go:221] Checking for updates...
	I1218 01:47:25.370797 1550381 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:47:25.373705 1550381 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:25.376399 1550381 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:47:25.379145 1550381 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:47:25.381925 1550381 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1218 01:47:23.895415 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:47:25.400717 1542458 node_ready.go:38] duration metric: took 6m0.00576723s for node "no-preload-970975" to be "Ready" ...
	I1218 01:47:25.403890 1542458 out.go:203] 
	W1218 01:47:25.406708 1542458 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 01:47:25.406730 1542458 out.go:285] * 
	W1218 01:47:25.413144 1542458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:47:25.416224 1542458 out.go:203] 
	I1218 01:47:25.385246 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:25.385825 1550381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:47:25.416975 1550381 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:47:25.417132 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.547941 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.531353346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.548100 1550381 docker.go:319] overlay module found
	I1218 01:47:25.551414 1550381 out.go:179] * Using the docker driver based on existing profile
	I1218 01:47:25.554261 1550381 start.go:309] selected driver: docker
	I1218 01:47:25.554288 1550381 start.go:927] validating driver "docker" against &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.554406 1550381 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:47:25.555118 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.640875 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.630200713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.641222 1550381 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:47:25.641258 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:25.641307 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:25.641353 1550381 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.647668 1550381 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:47:25.650778 1550381 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:47:25.654776 1550381 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:47:25.657861 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:25.657921 1550381 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:47:25.657930 1550381 cache.go:65] Caching tarball of preloaded images
	I1218 01:47:25.658010 1550381 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:47:25.658022 1550381 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:47:25.658128 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:25.658345 1550381 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:47:25.717764 1550381 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:47:25.717789 1550381 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:47:25.717804 1550381 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:47:25.717832 1550381 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:47:25.717885 1550381 start.go:364] duration metric: took 36.159µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:47:25.717905 1550381 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:47:25.717910 1550381 fix.go:54] fixHost starting: 
	I1218 01:47:25.718174 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:25.745308 1550381 fix.go:112] recreateIfNeeded on newest-cni-120615: state=Stopped err=<nil>
	W1218 01:47:25.745341 1550381 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:47:25.748580 1550381 out.go:252] * Restarting existing docker container for "newest-cni-120615" ...
	I1218 01:47:25.748689 1550381 cli_runner.go:164] Run: docker start newest-cni-120615
	I1218 01:47:26.093744 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:26.142570 1550381 kic.go:430] container "newest-cni-120615" state is running.
	I1218 01:47:26.143025 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:26.185359 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:26.185574 1550381 machine.go:94] provisionDockerMachine start ...
	I1218 01:47:26.185645 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:26.213286 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:26.213626 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:26.213647 1550381 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:47:26.214251 1550381 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51806->127.0.0.1:34217: read: connection reset by peer
	I1218 01:47:29.372266 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:47:29.372355 1550381 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:47:29.372452 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:29.391771 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:29.392072 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:29.392083 1550381 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:47:29.561538 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:47:29.561625 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:29.579579 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:29.579890 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:29.579907 1550381 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:47:29.737159 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:47:29.737184 1550381 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:47:29.737219 1550381 ubuntu.go:190] setting up certificates
	I1218 01:47:29.737230 1550381 provision.go:84] configureAuth start
	I1218 01:47:29.737295 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:29.756140 1550381 provision.go:143] copyHostCerts
	I1218 01:47:29.756217 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:47:29.756227 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:47:29.756310 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:47:29.756403 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:47:29.756408 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:47:29.756436 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:47:29.756487 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:47:29.756491 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:47:29.756514 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:47:29.756559 1550381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
	I1218 01:47:30.464419 1550381 provision.go:177] copyRemoteCerts
	I1218 01:47:30.464487 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:47:30.464527 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.482395 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.589769 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:47:30.608046 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:47:30.627105 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 01:47:30.645433 1550381 provision.go:87] duration metric: took 908.179647ms to configureAuth
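configureAuth above issued a server certificate whose SANs cover the container IP, localhost, and the machine name. A minimal sketch of issuing such a SAN-bearing cert from a CA with crypto/x509; issueServerCert is a hypothetical helper, not minikube's provision.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate for the given SANs with the
// provided CA, splitting IP and DNS entries as the log's san=[...] list does.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-120615"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA, just to exercise the helper.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := issueServerCert(ca, caKey,
		[]string{"127.0.0.1", "192.168.85.2", "localhost", "minikube", "newest-cni-120615"})
	fmt.Println(len(der), err)
}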
	I1218 01:47:30.645503 1550381 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:47:30.645738 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:30.645753 1550381 machine.go:97] duration metric: took 4.460171667s to provisionDockerMachine
	I1218 01:47:30.645761 1550381 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:47:30.645773 1550381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:47:30.645828 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:47:30.645876 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.663527 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.774279 1550381 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:47:30.777807 1550381 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:47:30.777838 1550381 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:47:30.777851 1550381 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:47:30.777919 1550381 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:47:30.778044 1550381 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:47:30.778177 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:47:30.786077 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:47:30.804331 1550381 start.go:296] duration metric: took 158.553882ms for postStartSetup
	I1218 01:47:30.804411 1550381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:47:30.804450 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.822410 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.925924 1550381 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:47:30.931214 1550381 fix.go:56] duration metric: took 5.213296131s for fixHost
	I1218 01:47:30.931236 1550381 start.go:83] releasing machines lock for "newest-cni-120615", held for 5.213342998s
	I1218 01:47:30.931301 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:30.952534 1550381 ssh_runner.go:195] Run: cat /version.json
	I1218 01:47:30.952560 1550381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:47:30.952584 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.952698 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.969636 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.973480 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:31.167774 1550381 ssh_runner.go:195] Run: systemctl --version
	I1218 01:47:31.174874 1550381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:47:31.179507 1550381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:47:31.179587 1550381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:47:31.187709 1550381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1218 01:47:31.187739 1550381 start.go:496] detecting cgroup driver to use...
	I1218 01:47:31.187790 1550381 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:47:31.187842 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:47:31.205437 1550381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:47:31.218917 1550381 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:47:31.218989 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:47:31.234859 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:47:31.247863 1550381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:47:31.361666 1550381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:47:31.478401 1550381 docker.go:234] disabling docker service ...
	I1218 01:47:31.478516 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:47:31.493181 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:47:31.506484 1550381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:47:31.622932 1550381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:47:31.755398 1550381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
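The sequence above stops, disables, and masks the Docker units so only containerd serves the CRI. A minimal sketch of driving that teardown from Go (illustrative only; the real calls go through minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		// Failures are reported but not fatal, matching the log above.
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}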
	I1218 01:47:31.768148 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:47:31.786320 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:47:31.795518 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:47:31.804506 1550381 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:47:31.804591 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:47:31.814205 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:47:31.823037 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:47:31.832187 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:47:31.841421 1550381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:47:31.849663 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:47:31.858543 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:47:31.867324 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:47:31.878120 1550381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:47:31.886565 1550381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:47:31.894226 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:32.000205 1550381 ssh_runner.go:195] Run: sudo systemctl restart containerd
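The sed pipeline above rewrites /etc/containerd/config.toml so containerd uses the "cgroupfs" driver (SystemdCgroup = false), matching the driver detected on the host. A minimal sketch of the core substitution as a Go regexp rewrite; the real edit is done with sed over SSH:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Trimmed stand-in for /etc/containerd/config.toml.
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true"
	// Same substitution the sed call performs: keep indentation, flip the value.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}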
	I1218 01:47:32.119373 1550381 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:47:32.119494 1550381 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:47:32.123705 1550381 start.go:564] Will wait 60s for crictl version
	I1218 01:47:32.123796 1550381 ssh_runner.go:195] Run: which crictl
	I1218 01:47:32.127736 1550381 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:47:32.151646 1550381 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:47:32.151742 1550381 ssh_runner.go:195] Run: containerd --version
	I1218 01:47:32.171630 1550381 ssh_runner.go:195] Run: containerd --version
	I1218 01:47:32.197786 1550381 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:47:32.200756 1550381 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:47:32.216905 1550381 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:47:32.220989 1550381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:47:32.234255 1550381 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:47:32.237186 1550381 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:47:32.237352 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:32.237431 1550381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:47:32.266567 1550381 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:47:32.266592 1550381 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:47:32.266653 1550381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:47:32.290056 1550381 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:47:32.290080 1550381 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:47:32.290087 1550381 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:47:32.290202 1550381 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:47:32.290272 1550381 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:47:32.317281 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:32.317305 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:32.317328 1550381 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:47:32.317382 1550381 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:47:32.317534 1550381 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 01:47:32.317611 1550381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:47:32.325240 1550381 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:47:32.325360 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:47:32.332953 1550381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:47:32.345753 1550381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:47:32.358201 1550381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
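The 2233-byte kubeadm.yaml staged above is rendered from the profile's cluster config. A minimal sketch, over a deliberately trimmed template, of producing such a config from parameters with text/template (the real file also carries Init, Kubelet, and KubeProxy sections):

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for minikube's full kubeadm template.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.SvcCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, map[string]any{
		"Endpoint": "control-plane.minikube.internal",
		"Port":     8443,
		"Version":  "v1.35.0-rc.1",
		"PodCIDR":  "10.42.0.0/16",
		"SvcCIDR":  "10.96.0.0/12",
	})
}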
	I1218 01:47:32.371135 1550381 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:47:32.374910 1550381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:47:32.385004 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:32.524322 1550381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:47:32.543517 1550381 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:47:32.543581 1550381 certs.go:195] generating shared ca certs ...
	I1218 01:47:32.543620 1550381 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:32.543768 1550381 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:47:32.543847 1550381 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:47:32.543878 1550381 certs.go:257] generating profile certs ...
	I1218 01:47:32.544012 1550381 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:47:32.544110 1550381 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:47:32.544194 1550381 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:47:32.544363 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:47:32.544429 1550381 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:47:32.544454 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:47:32.544506 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:47:32.544561 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:47:32.544639 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:47:32.544713 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:47:32.545379 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:47:32.570494 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:47:32.589292 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:47:32.607511 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:47:32.630085 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:47:32.648120 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:47:32.665293 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:47:32.683115 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:47:32.701108 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:47:32.719384 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:47:32.737332 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:47:32.755228 1550381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:47:32.768547 1550381 ssh_runner.go:195] Run: openssl version
	I1218 01:47:32.775214 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.783201 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:47:32.791100 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.794909 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.794975 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.836868 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:47:32.844649 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.852089 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:47:32.859827 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.863774 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.863845 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.904999 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:47:32.912518 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.919928 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:47:32.927254 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.930966 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.931034 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.972378 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
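Each cycle above hashes a CA with `openssl x509 -hash` and symlinks it as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can resolve it. A minimal sketch of the same step driven from Go (requires openssl and root; the paths are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	os.Remove(link) // ln -fs equivalent: drop any stale link first
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}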
	I1218 01:47:32.979895 1550381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:47:32.983509 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:47:33.024763 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:47:33.066928 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:47:33.108240 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:47:33.150820 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:47:33.193721 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
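The `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another day. A minimal sketch of the equivalent check in Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within the next 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400s")
	} else {
		fmt.Println("certificate valid for at least another day")
	}
}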
	I1218 01:47:33.236344 1550381 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:33.236435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:47:33.236534 1550381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:47:33.262713 1550381 cri.go:89] found id: ""
	I1218 01:47:33.262784 1550381 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:47:33.270865 1550381 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:47:33.270885 1550381 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:47:33.270962 1550381 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:47:33.278569 1550381 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:47:33.279133 1550381 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-120615" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:33.279389 1550381 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-120615" cluster setting kubeconfig missing "newest-cni-120615" context setting]
	I1218 01:47:33.279869 1550381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.281782 1550381 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:47:33.289414 1550381 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1218 01:47:33.289446 1550381 kubeadm.go:602] duration metric: took 18.555667ms to restartPrimaryControlPlane
	I1218 01:47:33.289461 1550381 kubeadm.go:403] duration metric: took 53.123465ms to StartCluster
	I1218 01:47:33.289476 1550381 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.289537 1550381 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:33.290381 1550381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
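The kubeconfig repair above adds the missing "newest-cni-120615" cluster and context entries to the test harness's kubeconfig. A minimal sketch using client-go's clientcmd package (external module k8s.io/client-go; values taken from the log, and the matching AuthInfo entry is omitted for brevity):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22186-1259289/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = clientcmdapi.NewConfig() // start fresh if the file is unreadable
	}
	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.85.2:8443"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt"
	cfg.Clusters["newest-cni-120615"] = cluster

	ctx := clientcmdapi.NewContext()
	ctx.Cluster = "newest-cni-120615"
	ctx.AuthInfo = "newest-cni-120615"
	cfg.Contexts["newest-cni-120615"] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}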
	I1218 01:47:33.290591 1550381 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:47:33.290894 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:33.290942 1550381 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 01:47:33.291049 1550381 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-120615"
	I1218 01:47:33.291069 1550381 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-120615"
	I1218 01:47:33.291087 1550381 addons.go:70] Setting dashboard=true in profile "newest-cni-120615"
	I1218 01:47:33.291142 1550381 addons.go:239] Setting addon dashboard=true in "newest-cni-120615"
	W1218 01:47:33.291166 1550381 addons.go:248] addon dashboard should already be in state true
	I1218 01:47:33.291217 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.291092 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.291788 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.291956 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.291099 1550381 addons.go:70] Setting default-storageclass=true in profile "newest-cni-120615"
	I1218 01:47:33.292357 1550381 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-120615"
	I1218 01:47:33.292683 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.296441 1550381 out.go:179] * Verifying Kubernetes components...
	I1218 01:47:33.299325 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:33.332793 1550381 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:47:33.338698 1550381 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:33.338720 1550381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:47:33.338786 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.346302 1550381 addons.go:239] Setting addon default-storageclass=true in "newest-cni-120615"
	I1218 01:47:33.346350 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.346767 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.347220 1550381 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1218 01:47:33.357584 1550381 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1218 01:47:33.364736 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1218 01:47:33.364766 1550381 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1218 01:47:33.364841 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.384388 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.388779 1550381 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:33.388806 1550381 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:47:33.388870 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.420777 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.424445 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.506937 1550381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:47:33.590614 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:33.623167 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:33.644036 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1218 01:47:33.644058 1550381 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1218 01:47:33.686194 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1218 01:47:33.686219 1550381 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1218 01:47:33.699257 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1218 01:47:33.699284 1550381 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1218 01:47:33.712575 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1218 01:47:33.712598 1550381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1218 01:47:33.726008 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1218 01:47:33.726036 1550381 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1218 01:47:33.739578 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1218 01:47:33.739601 1550381 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1218 01:47:33.752283 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1218 01:47:33.752306 1550381 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1218 01:47:33.765197 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1218 01:47:33.765228 1550381 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1218 01:47:33.778397 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:33.778463 1550381 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1218 01:47:33.791499 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:34.144394 1550381 api_server.go:52] waiting for apiserver process to appear ...
	I1218 01:47:34.144937 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:34.144564 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145084 1550381 retry.go:31] will retry after 226.399987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.144607 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145242 1550381 retry.go:31] will retry after 194.583533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.144818 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145308 1550381 retry.go:31] will retry after 316.325527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.341084 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:34.371646 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:34.416769 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.416804 1550381 retry.go:31] will retry after 482.49716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.445473 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.445504 1550381 retry.go:31] will retry after 401.349435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.462702 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:34.529683 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.529767 1550381 retry.go:31] will retry after 466.9672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.645712 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:34.847135 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:34.899725 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:34.915787 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.915821 1550381 retry.go:31] will retry after 680.448009ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.980399 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.980428 1550381 retry.go:31] will retry after 371.155762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.997728 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:35.075146 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.075188 1550381 retry.go:31] will retry after 528.393444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.145511 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:35.352321 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:35.422768 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.422808 1550381 retry.go:31] will retry after 703.678182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.597254 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:35.604769 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:35.645316 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:35.700025 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.700065 1550381 retry.go:31] will retry after 524.167729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:35.720166 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.720199 1550381 retry.go:31] will retry after 843.445988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.127505 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:36.145942 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:36.218437 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.218469 1550381 retry.go:31] will retry after 1.4365249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
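None of these applies can succeed until the apiserver answers on localhost:8443; the retries only buy time for it to come up. A probe of the kind minikube is effectively waiting on might look like the Go sketch below. The /readyz path is the standard kube-apiserver readiness endpoint, and the skip-verify transport is an illustration-only shortcut for the cluster's self-signed certificate; neither detail is taken from this log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverReady reports whether the apiserver answers its readiness
	// endpoint. A "connect: connection refused" error, as in the log,
	// simply returns false.
	func apiserverReady(url string) bool {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The cluster serves a self-signed cert; skip
				// verification for this probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		fmt.Println(apiserverReady("https://localhost:8443/readyz"))
	}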
	I1218 01:47:36.224772 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:36.288029 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.288065 1550381 retry.go:31] will retry after 1.092662167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.564433 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:36.628283 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.628318 1550381 retry.go:31] will retry after 821.063441ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.645614 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.145021 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.381704 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:37.442129 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.442163 1550381 retry.go:31] will retry after 1.066797005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.450315 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:37.513152 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.513188 1550381 retry.go:31] will retry after 2.094232702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.645565 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.656033 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:37.728287 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.728341 1550381 retry.go:31] will retry after 2.192570718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.145856 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:38.509851 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:38.574127 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.574163 1550381 retry.go:31] will retry after 2.056176901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
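Interleaved with the failing applies, the ssh_runner lines show minikube polling about once per second for the apiserver process with sudo pgrep -xnf kube-apiserver.*minikube.* (01:47:34.645, 35.145, 35.645, and so on). A self-contained Go sketch of the same poll loop; the pgrep pattern is taken verbatim from the log, while the loop itself is illustrative rather than minikube's ssh_runner code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// pgrep exits non-zero (surfaced as an *exec.ExitError) while no
		// process matches, so keep polling until it succeeds.
		for {
			out, err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("kube-apiserver is up, pid(s): %s", out)
				return
			}
			time.Sleep(time.Second)
		}
	}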
	I1218 01:47:38.645562 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:39.145843 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:39.608414 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:39.645902 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:39.677401 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.677446 1550381 retry.go:31] will retry after 2.219986296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:39.921684 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:39.986039 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.986071 1550381 retry.go:31] will retry after 1.874712757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:40.145336 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:40.630985 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:40.645468 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:40.721503 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:40.721589 1550381 retry.go:31] will retry after 5.659633915s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:41.145050 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:41.645071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:41.861275 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:41.897736 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:41.919445 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.919480 1550381 retry.go:31] will retry after 5.257989291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	W1218 01:47:41.968013 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.968047 1550381 retry.go:31] will retry after 2.407225539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:42.145507 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:42.645709 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:43.145827 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:43.645206 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:44.145140 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
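Between apply attempts the test polls every ~500ms for a running apiserver with sudo pgrep -xnf kube-apiserver.*minikube.* (pgrep exits 0 when a process matches, 1 when none does). A hypothetical local equivalent of that poll in Go; the real test issues the command inside the node over SSH via ssh_runner.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the pgrep probe from the log: a nil error means
// pgrep exited 0, i.e. a matching kube-apiserver process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	ticker := time.NewTicker(500 * time.Millisecond) // matches the log's cadence
	defer ticker.Stop()
	for range ticker.C {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
	}
}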
	I1218 01:47:44.375521 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:44.445301 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:44.445333 1550381 retry.go:31] will retry after 6.049252935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:44.645790 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:45.145091 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:45.646076 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:46.145377 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:46.381920 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:46.446240 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:46.446272 1550381 retry.go:31] will retry after 6.470588043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:46.645629 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:47.145934 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:47.178013 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:47.241089 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:47.241122 1550381 retry.go:31] will retry after 8.808880621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
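Every "error validating" line above is a connection failure in disguise: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443, so the manifests themselves are never at fault. The --validate=false workaround kubectl suggests would not rescue these applies either, since the apply request itself still needs a reachable apiserver. A hypothetical probe confirming the port is closed:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Reproduces the log's failure mode: connect: connection refused.
		fmt.Println("apiserver endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}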
	I1218 01:47:47.645680 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:48.145730 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:48.646057 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:49.145645 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:49.646010 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:50.145037 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:50.495265 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:50.557628 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:50.557662 1550381 retry.go:31] will retry after 5.398438748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:50.645968 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:51.145305 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:51.645106 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.145818 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.645593 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.917095 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:53.016010 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:53.016044 1550381 retry.go:31] will retry after 7.672661981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:53.145281 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:53.645853 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:54.145129 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:54.645151 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.145097 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.645490 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.957008 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:56.023826 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.023863 1550381 retry.go:31] will retry after 8.13600998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:56.050917 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:56.116243 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.116276 1550381 retry.go:31] will retry after 5.600895051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:47:56.145475 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:56.645854 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:57.145640 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:57.645927 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:58.145109 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:58.645621 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:59.145858 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:59.645893 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.145118 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.645093 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.689724 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:00.750450 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:00.750485 1550381 retry.go:31] will retry after 19.327903144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:48:01.145862 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:01.645460 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:01.717566 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:48:01.782999 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:01.783030 1550381 retry.go:31] will retry after 18.603092159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout/stderr: [identical to the preceding apply failure; duplicate retry output omitted]
	I1218 01:48:02.145671 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:02.645087 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:03.145743 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:03.645040 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:04.145864 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:04.161047 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:48:04.272335 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:04.272373 1550381 retry.go:31] will retry after 12.170847168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1218 01:48:04.645651 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:05.145079 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:05.645793 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:06.145198 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:06.645126 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:07.145836 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:07.645773 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:08.145131 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:08.645630 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:09.145136 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:09.645143 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:10.145076 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:10.645910 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:11.146089 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:11.645790 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:12.145142 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:12.645270 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:13.145485 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:13.645137 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:14.145724 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:14.645837 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:15.146110 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:15.645847 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:16.145895 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
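[Editor's note] Interleaved with the retries, the ssh_runner lines above show a ~500 ms liveness poll: minikube keeps running `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node until an apiserver process appears. A simplified local sketch of that polling loop follows; the real code runs the command over SSH via ssh_runner.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process exists or the
// context expires; pgrep exits non-zero when nothing matches the pattern.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx, 500*time.Millisecond))
}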
	I1218 01:48:16.444141 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:48:16.505161 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:16.505200 1550381 retry.go:31] will retry after 25.656674631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1218 01:48:16.645612 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:17.145123 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:17.645762 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:18.145134 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:18.645126 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:19.145081 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:19.645152 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:20.079482 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:20.141746 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.141779 1550381 retry.go:31] will retry after 22.047786735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1218 01:48:20.145903 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:20.387205 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:48:20.452144 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.452188 1550381 retry.go:31] will retry after 24.810473247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1218 01:48:20.645470 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:21.146015 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:21.645174 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:22.145273 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:22.645128 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:23.145100 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:23.645712 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:24.145139 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:24.646075 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:25.145371 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:25.645387 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:26.145943 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:26.645074 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:27.145918 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:27.645060 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:28.145641 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:28.645873 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:29.146022 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:29.645071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:30.145074 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:30.645956 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:31.145849 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:31.645447 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:32.145809 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:32.645085 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:33.146067 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:33.645142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:33.645253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:33.669719 1550381 cri.go:89] found id: ""
	I1218 01:48:33.669745 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.669754 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:33.669760 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:33.669817 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:33.695127 1550381 cri.go:89] found id: ""
	I1218 01:48:33.695150 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.695159 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:33.695164 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:33.695253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:33.719637 1550381 cri.go:89] found id: ""
	I1218 01:48:33.719659 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.719668 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:33.719674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:33.719778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:33.746705 1550381 cri.go:89] found id: ""
	I1218 01:48:33.746731 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.746740 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:33.746746 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:33.746805 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:33.774595 1550381 cri.go:89] found id: ""
	I1218 01:48:33.774620 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.774631 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:33.774638 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:33.774696 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:33.802090 1550381 cri.go:89] found id: ""
	I1218 01:48:33.802115 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.802123 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:33.802130 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:33.802187 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:33.827047 1550381 cri.go:89] found id: ""
	I1218 01:48:33.827084 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.827094 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:33.827100 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:33.827172 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:33.855186 1550381 cri.go:89] found id: ""
	I1218 01:48:33.855213 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.855222 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
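[Editor's note] Having given up waiting for the process, minikube falls back to inspecting the container runtime directly: for each control-plane component, cri.go runs `crictl ps -a --quiet --name=<component>`, which prints matching container IDs one per line, so the empty output above means not even an exited container exists. A simplified local version of that check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the check above: crictl prints one container ID
// per line for containers whose name matches, in any state (-a).
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty slice: no container found
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Println(name, "->", ids)
	}
}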
	I1218 01:48:33.855230 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:33.855241 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:33.910490 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:33.910527 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:33.925321 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:33.925361 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:33.990602 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:33.982192    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.983024    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.984725    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.985196    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.986663    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:33.990624 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:33.990636 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:34.016861 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:34.016901 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
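[Editor's note] With no containers to inspect, logs.go collects a fixed diagnostics bundle instead: the kubelet and containerd journals, filtered dmesg, `kubectl describe nodes` (which fails here with the same connection refused), and a container-status listing. A sketch of that best-effort collection loop, with the commands taken verbatim from the log lines above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the "Gathering logs for ..." lines above; each is
	// run through bash so pipes and fallbacks behave as in the log.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// best effort: a failed collector (like the refused describe
			// nodes here) is reported and skipped, never fatal
			fmt.Printf("failed gathering %s: %v\n", s.name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", s.name, out)
	}
}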
	I1218 01:48:36.546620 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:36.557304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:36.557390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:36.582868 1550381 cri.go:89] found id: ""
	I1218 01:48:36.582891 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.582900 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:36.582906 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:36.582964 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:36.608045 1550381 cri.go:89] found id: ""
	I1218 01:48:36.608067 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.608075 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:36.608081 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:36.608137 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:36.633385 1550381 cri.go:89] found id: ""
	I1218 01:48:36.633408 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.633417 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:36.633423 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:36.633482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:36.657140 1550381 cri.go:89] found id: ""
	I1218 01:48:36.657165 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.657175 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:36.657187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:36.657254 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:36.686651 1550381 cri.go:89] found id: ""
	I1218 01:48:36.686673 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.686683 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:36.686689 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:36.686753 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:36.712049 1550381 cri.go:89] found id: ""
	I1218 01:48:36.712073 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.712082 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:36.712089 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:36.712146 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:36.736327 1550381 cri.go:89] found id: ""
	I1218 01:48:36.736355 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.736369 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:36.736375 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:36.736432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:36.763059 1550381 cri.go:89] found id: ""
	I1218 01:48:36.763085 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.763094 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:36.763104 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:36.763115 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:36.818060 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:36.818095 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:36.833161 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:36.833198 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:36.900981 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:36.892245    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.892727    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894448    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894886    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.896574    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:36.901005 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:36.901018 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:36.926395 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:36.926435 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:39.461526 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:39.472938 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:39.473011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:39.499282 1550381 cri.go:89] found id: ""
	I1218 01:48:39.499309 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.499317 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:39.499324 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:39.499387 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:39.524947 1550381 cri.go:89] found id: ""
	I1218 01:48:39.524983 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.524992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:39.524998 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:39.525108 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:39.549919 1550381 cri.go:89] found id: ""
	I1218 01:48:39.549944 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.549953 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:39.549959 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:39.550021 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:39.574351 1550381 cri.go:89] found id: ""
	I1218 01:48:39.574376 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.574391 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:39.574398 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:39.574456 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:39.598033 1550381 cri.go:89] found id: ""
	I1218 01:48:39.598054 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.598063 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:39.598069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:39.598133 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:39.626910 1550381 cri.go:89] found id: ""
	I1218 01:48:39.626932 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.626940 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:39.626946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:39.627002 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:39.655231 1550381 cri.go:89] found id: ""
	I1218 01:48:39.655302 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.655326 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:39.655346 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:39.655426 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:39.684000 1550381 cri.go:89] found id: ""
	I1218 01:48:39.684079 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.684106 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:39.684129 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:39.684170 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:39.739075 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:39.739109 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:39.753861 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:39.753890 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:39.817313 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:39.809803    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.810430    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.811897    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.812206    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.813639    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:39.817335 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:39.817347 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:39.842685 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:39.842727 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:42.162239 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:48:42.190324 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:42.249384 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:48:42.249527 1550381 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	]
	W1218 01:48:42.279196 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:42.279234 1550381 retry.go:31] will retry after 35.148907823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1218 01:48:42.371473 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:42.382637 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:42.382711 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:42.428461 1550381 cri.go:89] found id: ""
	I1218 01:48:42.428490 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.428499 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:42.428505 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:42.428565 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:42.464484 1550381 cri.go:89] found id: ""
	I1218 01:48:42.464511 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.464520 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:42.464526 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:42.464600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:42.501574 1550381 cri.go:89] found id: ""
	I1218 01:48:42.501644 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.501668 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:42.501682 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:42.501756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:42.529255 1550381 cri.go:89] found id: ""
	I1218 01:48:42.529283 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.529292 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:42.529299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:42.529357 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:42.563020 1550381 cri.go:89] found id: ""
	I1218 01:48:42.563093 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.563130 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:42.563153 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:42.563240 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:42.589599 1550381 cri.go:89] found id: ""
	I1218 01:48:42.589672 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.589689 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:42.589697 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:42.589756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:42.620478 1550381 cri.go:89] found id: ""
	I1218 01:48:42.620500 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.620509 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:42.620515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:42.620600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:42.647535 1550381 cri.go:89] found id: ""
	I1218 01:48:42.647560 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.647574 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:42.647583 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:42.647594 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:42.705328 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:42.705366 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:42.720602 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:42.720653 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:42.791434 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:42.782564    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.783462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785016    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.787043    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
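
Every "describe nodes" attempt above fails the same way: kubectl's discovery client retries the /api endpoint five times, and each attempt dies with "connect: connection refused" on localhost:8443, meaning nothing is listening on the apiserver port at all (as opposed to a TLS or auth failure). A quick way to reproduce that distinction is a bare TCP probe; this is an illustrative sketch, not part of the test harness:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" here means the port is closed, which is
        // exactly the failure mode in the kubectl stderr above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
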
	I1218 01:48:42.791460 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:42.791474 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:42.816821 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:42.816855 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:45.263722 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:48:45.345805 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:48:45.349241 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:45.349279 1550381 retry.go:31] will retry after 26.611542555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
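
The storageclass addon apply cannot succeed until the apiserver comes up, so minikube queues another attempt ("will retry after 26.611542555s", retry.go:31) instead of failing outright. The shape of that retry-with-growing-delay loop, as a standalone sketch (the helper below is assumed for illustration and is not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff re-runs step with a doubling delay until it succeeds
    // or the overall deadline would be exceeded.
    func retryWithBackoff(step func() error, maxWait time.Duration) error {
        delay := time.Second
        deadline := time.Now().Add(maxWait)
        for {
            err := step()
            if err == nil {
                return nil
            }
            if time.Now().Add(delay).After(deadline) {
                return fmt.Errorf("giving up: %w", err)
            }
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
    }

    func main() {
        _ = retryWithBackoff(func() error {
            return errors.New("connect: connection refused")
        }, 15*time.Second)
    }
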
	I1218 01:48:45.357893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:45.358009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:45.383950 1550381 cri.go:89] found id: ""
	I1218 01:48:45.383977 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.383986 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:45.383993 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:45.384055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:45.429969 1550381 cri.go:89] found id: ""
	I1218 01:48:45.429995 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.430004 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:45.430010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:45.430071 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:45.493689 1550381 cri.go:89] found id: ""
	I1218 01:48:45.493720 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.493730 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:45.493736 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:45.493830 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:45.520332 1550381 cri.go:89] found id: ""
	I1218 01:48:45.520355 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.520363 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:45.520369 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:45.520425 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:45.547181 1550381 cri.go:89] found id: ""
	I1218 01:48:45.547245 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.547270 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:45.547289 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:45.547366 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:45.572686 1550381 cri.go:89] found id: ""
	I1218 01:48:45.572754 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.572780 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:45.572804 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:45.572879 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:45.596710 1550381 cri.go:89] found id: ""
	I1218 01:48:45.596734 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.596743 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:45.596749 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:45.596809 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:45.622285 1550381 cri.go:89] found id: ""
	I1218 01:48:45.622316 1550381 logs.go:282] 0 containers: []
	W1218 01:48:45.622325 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:45.622335 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:45.622345 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:45.680819 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:45.680854 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:45.695825 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:45.695856 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:45.758598 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:45.749462    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.750187    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.751923    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.752474    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:45.754167    2323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:45.758621 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:45.758634 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:45.783476 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:45.783513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:48.311112 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:48.321845 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:48.321917 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:48.347239 1550381 cri.go:89] found id: ""
	I1218 01:48:48.347260 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.347269 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:48.347276 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:48.347352 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:48.372522 1550381 cri.go:89] found id: ""
	I1218 01:48:48.372548 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.372557 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:48.372564 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:48.372641 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:48.419361 1550381 cri.go:89] found id: ""
	I1218 01:48:48.419385 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.419402 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:48.419409 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:48.419476 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:48.468755 1550381 cri.go:89] found id: ""
	I1218 01:48:48.468780 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.468789 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:48.468795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:48.468865 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:48.499951 1550381 cri.go:89] found id: ""
	I1218 01:48:48.499978 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.499987 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:48.499993 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:48.500066 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:48.525758 1550381 cri.go:89] found id: ""
	I1218 01:48:48.525784 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.525793 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:48.525799 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:48.525867 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:48.554959 1550381 cri.go:89] found id: ""
	I1218 01:48:48.554982 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.554991 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:48.554999 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:48.555073 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:48.579603 1550381 cri.go:89] found id: ""
	I1218 01:48:48.579627 1550381 logs.go:282] 0 containers: []
	W1218 01:48:48.579636 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:48.579646 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:48.579682 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:48.638239 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:48.638284 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:48.652698 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:48.652747 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:48.719758 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:48.711855    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.712379    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.713878    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.714310    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:48.715829    2441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:48.719781 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:48.719796 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:48.744911 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:48.744946 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
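
The "container status" line above leans on two shell fallbacks: "which crictl || echo crictl" keeps the command line intact even when which finds nothing on PATH, and the trailing "|| sudo docker ps -a" consults docker only if the crictl invocation fails. A hedged Go wrapper showing the same fallback chain (illustrative only; the real command runs on the node over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl; fall back to docker if crictl is missing or errors.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }
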
	I1218 01:48:51.273570 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:51.283902 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:51.283973 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:51.308033 1550381 cri.go:89] found id: ""
	I1218 01:48:51.308057 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.308065 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:51.308072 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:51.308135 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:51.335581 1550381 cri.go:89] found id: ""
	I1218 01:48:51.335604 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.335612 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:51.335618 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:51.335676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:51.364109 1550381 cri.go:89] found id: ""
	I1218 01:48:51.364135 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.364144 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:51.364150 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:51.364208 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:51.401663 1550381 cri.go:89] found id: ""
	I1218 01:48:51.401689 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.401698 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:51.401704 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:51.401764 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:51.436653 1550381 cri.go:89] found id: ""
	I1218 01:48:51.436679 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.436688 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:51.436696 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:51.436755 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:51.484873 1550381 cri.go:89] found id: ""
	I1218 01:48:51.484900 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.484908 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:51.484915 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:51.484972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:51.512364 1550381 cri.go:89] found id: ""
	I1218 01:48:51.512389 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.512398 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:51.512404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:51.512463 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:51.536334 1550381 cri.go:89] found id: ""
	I1218 01:48:51.536359 1550381 logs.go:282] 0 containers: []
	W1218 01:48:51.536368 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:51.536378 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:51.536389 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:51.590814 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:51.590847 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:51.605410 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:51.605438 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:51.679184 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:51.670350    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.671165    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.673030    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.673634    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:51.675286    2555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:51.679247 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:51.679267 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:51.704862 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:51.704898 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:54.232571 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:54.243250 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:54.243318 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:54.268694 1550381 cri.go:89] found id: ""
	I1218 01:48:54.268762 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.268776 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:54.268783 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:54.268861 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:54.294766 1550381 cri.go:89] found id: ""
	I1218 01:48:54.294789 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.294798 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:54.294811 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:54.294872 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:54.319370 1550381 cri.go:89] found id: ""
	I1218 01:48:54.319396 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.319405 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:54.319411 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:54.319470 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:54.344762 1550381 cri.go:89] found id: ""
	I1218 01:48:54.344805 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.344815 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:54.344839 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:54.344928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:54.376778 1550381 cri.go:89] found id: ""
	I1218 01:48:54.376805 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.376823 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:54.376830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:54.376948 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:54.435510 1550381 cri.go:89] found id: ""
	I1218 01:48:54.435589 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.435620 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:54.435641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:54.435763 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:54.481350 1550381 cri.go:89] found id: ""
	I1218 01:48:54.481428 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.481456 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:54.481476 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:54.481621 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:54.520301 1550381 cri.go:89] found id: ""
	I1218 01:48:54.520377 1550381 logs.go:282] 0 containers: []
	W1218 01:48:54.520399 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:54.520420 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:54.520457 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:54.578993 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:54.579045 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:54.595845 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:54.595876 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:54.661543 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:54.653204    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.654003    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.655599    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.656056    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:54.657576    2666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:48:54.661566 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:54.661578 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:54.687751 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:54.687803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:57.222271 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:57.232723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:57.232795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:57.260837 1550381 cri.go:89] found id: ""
	I1218 01:48:57.260858 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.260866 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:57.260872 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:57.260928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:57.286122 1550381 cri.go:89] found id: ""
	I1218 01:48:57.286148 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.286156 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:57.286163 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:57.286220 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:57.310908 1550381 cri.go:89] found id: ""
	I1218 01:48:57.310930 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.310939 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:57.310945 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:57.311005 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:57.336552 1550381 cri.go:89] found id: ""
	I1218 01:48:57.336573 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.336583 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:57.336589 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:57.336681 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:57.363069 1550381 cri.go:89] found id: ""
	I1218 01:48:57.363098 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.363106 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:57.363113 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:57.363175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:57.387453 1550381 cri.go:89] found id: ""
	I1218 01:48:57.387483 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.387492 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:57.387499 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:57.387556 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:57.455540 1550381 cri.go:89] found id: ""
	I1218 01:48:57.455567 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.455576 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:57.455583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:57.455641 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:57.487729 1550381 cri.go:89] found id: ""
	I1218 01:48:57.487751 1550381 logs.go:282] 0 containers: []
	W1218 01:48:57.487759 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:57.487773 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:57.487783 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:57.513517 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:57.513555 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:57.541522 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:57.541591 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:57.599250 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:57.599285 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:57.614575 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:57.614612 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:57.685065 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:57.672222    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.672963    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.677651    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.678785    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:57.679420    2795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
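
The cycles above repeat on a roughly three-second cadence: pgrep for a kube-apiserver process, then one crictl query per control-plane component, then a log sweep when everything comes back empty. Reduced to its skeleton, the loop looks like the sketch below (local commands and the attempt cap are assumptions; the real checks run on the node over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func apiserverUp() bool {
        // Same check as the ssh_runner pgrep line in the log.
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        for attempt := 1; attempt <= 5 && !apiserverUp(); attempt++ {
            for _, name := range components {
                out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
                if len(out) == 0 {
                    fmt.Printf("no container matching %q\n", name)
                }
            }
            time.Sleep(3 * time.Second)
        }
    }
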
	I1218 01:49:00.185435 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:00.217821 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:00.217993 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:00.272675 1550381 cri.go:89] found id: ""
	I1218 01:49:00.272752 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.272781 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:00.272803 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:00.272911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:00.308098 1550381 cri.go:89] found id: ""
	I1218 01:49:00.308130 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.308140 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:00.308148 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:00.308229 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:00.342048 1550381 cri.go:89] found id: ""
	I1218 01:49:00.342083 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.342093 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:00.342102 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:00.342176 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:00.373793 1550381 cri.go:89] found id: ""
	I1218 01:49:00.373867 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.373893 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:00.373912 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:00.374032 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:00.453457 1550381 cri.go:89] found id: ""
	I1218 01:49:00.453540 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.453562 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:00.453580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:00.453674 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:00.497069 1550381 cri.go:89] found id: ""
	I1218 01:49:00.497139 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.497165 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:00.497229 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:00.497320 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:00.523805 1550381 cri.go:89] found id: ""
	I1218 01:49:00.523883 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.523907 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:00.523925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:00.523998 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:00.550245 1550381 cri.go:89] found id: ""
	I1218 01:49:00.550315 1550381 logs.go:282] 0 containers: []
	W1218 01:49:00.550338 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:00.550356 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:00.550368 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:00.606138 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:00.606171 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:00.621471 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:00.621501 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:00.687608 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:00.679362    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.680138    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.681738    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.682079    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:00.683574    2897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:00.687630 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:00.687645 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:00.713254 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:00.713288 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:03.251500 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:03.263863 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:03.263937 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:03.292341 1550381 cri.go:89] found id: ""
	I1218 01:49:03.292363 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.292372 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:03.292379 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:03.292444 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:03.318593 1550381 cri.go:89] found id: ""
	I1218 01:49:03.318618 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.318627 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:03.318633 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:03.318713 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:03.342954 1550381 cri.go:89] found id: ""
	I1218 01:49:03.342976 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.342984 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:03.342990 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:03.343056 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:03.369216 1550381 cri.go:89] found id: ""
	I1218 01:49:03.369240 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.369255 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:03.369262 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:03.369321 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:03.418160 1550381 cri.go:89] found id: ""
	I1218 01:49:03.418196 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.418208 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:03.418234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:03.418314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:03.468056 1550381 cri.go:89] found id: ""
	I1218 01:49:03.468090 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.468100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:03.468107 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:03.468177 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:03.493930 1550381 cri.go:89] found id: ""
	I1218 01:49:03.493954 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.493964 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:03.493970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:03.494028 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:03.522766 1550381 cri.go:89] found id: ""
	I1218 01:49:03.522799 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.522808 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:03.522817 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:03.522845 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:03.579881 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:03.579922 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:03.595497 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:03.595533 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:03.664750 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:03.656141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.656975    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.658591    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.659141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.660853    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:03.664774 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:03.664789 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:03.690066 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:03.690102 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
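This gather cycle (kubelet, dmesg, describe nodes, containerd, container status) repeats roughly every three seconds from here on while minikube waits for an apiserver that never comes up. A rough manual reproduction of the probe it is looping on, assuming a profile named PROFILE (hypothetical; the profile is not named in this excerpt):

    minikube ssh -p PROFILE -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # exits non-zero: no apiserver process
    minikube ssh -p PROFILE -- sudo crictl ps -a --quiet --name=kube-apiserver  # prints nothing: no apiserver container

Both checks coming back empty is what drives the repeated 'No container was found matching "kube-apiserver"' warnings.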
	I1218 01:49:06.220404 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:06.230940 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:06.231013 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:06.258449 1550381 cri.go:89] found id: ""
	I1218 01:49:06.258493 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.258501 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:06.258511 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:06.258570 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:06.284944 1550381 cri.go:89] found id: ""
	I1218 01:49:06.284967 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.284975 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:06.284981 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:06.285038 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:06.310888 1550381 cri.go:89] found id: ""
	I1218 01:49:06.310914 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.310923 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:06.310929 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:06.310992 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:06.336281 1550381 cri.go:89] found id: ""
	I1218 01:49:06.336306 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.336316 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:06.336322 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:06.336384 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:06.361424 1550381 cri.go:89] found id: ""
	I1218 01:49:06.361489 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.361507 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:06.361515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:06.361581 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:06.386353 1550381 cri.go:89] found id: ""
	I1218 01:49:06.386381 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.386390 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:06.386396 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:06.386458 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:06.420497 1550381 cri.go:89] found id: ""
	I1218 01:49:06.420523 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.420533 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:06.420540 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:06.420599 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:06.477983 1550381 cri.go:89] found id: ""
	I1218 01:49:06.478008 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.478017 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:06.478033 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:06.478045 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:06.542941 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:06.542988 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:06.557943 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:06.557971 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:06.638974 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:06.630522    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.631109    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.632992    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.633569    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.635234    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:06.638996 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:06.639008 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:06.665193 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:06.665231 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
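A note on the container-status command above: it is a two-level shell fallback. Spelled out (a sketch with the same logic as the logged one-liner):

    CRICTL="$(which crictl || echo crictl)"    # full path if crictl is installed, else the bare name so the error stays readable
    sudo "$CRICTL" ps -a || sudo docker ps -a  # prefer the CRI tool; fall back to the Docker CLI if it fails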
	I1218 01:49:09.197687 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:09.208321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:09.208432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:09.233962 1550381 cri.go:89] found id: ""
	I1218 01:49:09.233985 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.233993 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:09.234000 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:09.234061 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:09.262673 1550381 cri.go:89] found id: ""
	I1218 01:49:09.262697 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.262706 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:09.262712 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:09.262773 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:09.289951 1550381 cri.go:89] found id: ""
	I1218 01:49:09.289973 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.289982 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:09.289988 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:09.290053 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:09.314541 1550381 cri.go:89] found id: ""
	I1218 01:49:09.314570 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.314578 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:09.314585 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:09.314650 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:09.343459 1550381 cri.go:89] found id: ""
	I1218 01:49:09.343484 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.343493 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:09.343500 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:09.343563 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:09.376389 1550381 cri.go:89] found id: ""
	I1218 01:49:09.376413 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.376422 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:09.376429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:09.376488 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:09.436490 1550381 cri.go:89] found id: ""
	I1218 01:49:09.436567 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.436591 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:09.436611 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:09.436730 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:09.486769 1550381 cri.go:89] found id: ""
	I1218 01:49:09.486798 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.486807 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:09.486817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:09.486827 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:09.512058 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:09.512099 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:09.540109 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:09.540137 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:09.595196 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:09.595233 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:09.610057 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:09.610088 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:09.676821 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:09.667757    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.668366    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670119    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670625    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.672212    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:11.961101 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:49:12.022946 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:49:12.023052 1550381 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
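The storageclass apply fails before any object reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and that download is what hits connection refused. The workaround the error message suggests would skip validation but not the failure, since the apply itself still needs a reachable apiserver; a sketch using only paths from the log above:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml
    # validation is skipped, but the request to localhost:8443 is refused all the same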
	I1218 01:49:12.177224 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:12.188868 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:12.188946 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:12.214139 1550381 cri.go:89] found id: ""
	I1218 01:49:12.214162 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.214171 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:12.214178 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:12.214264 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:12.242355 1550381 cri.go:89] found id: ""
	I1218 01:49:12.242380 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.242389 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:12.242395 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:12.242483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:12.266515 1550381 cri.go:89] found id: ""
	I1218 01:49:12.266540 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.266548 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:12.266555 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:12.266613 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:12.290463 1550381 cri.go:89] found id: ""
	I1218 01:49:12.290529 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.290545 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:12.290553 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:12.290618 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:12.318223 1550381 cri.go:89] found id: ""
	I1218 01:49:12.318247 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.318256 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:12.318262 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:12.318337 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:12.342197 1550381 cri.go:89] found id: ""
	I1218 01:49:12.342222 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.342231 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:12.342238 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:12.342302 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:12.370588 1550381 cri.go:89] found id: ""
	I1218 01:49:12.370611 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.370620 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:12.370626 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:12.370688 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:12.418224 1550381 cri.go:89] found id: ""
	I1218 01:49:12.418249 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.418258 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:12.418268 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:12.418279 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:12.523068 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:12.514778    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.515411    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517016    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517626    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.519200    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:12.523095 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:12.523108 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:12.549040 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:12.549076 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:12.577176 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:12.577201 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:12.631665 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:12.631703 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:15.147547 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:15.158736 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:15.158812 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:15.184772 1550381 cri.go:89] found id: ""
	I1218 01:49:15.184838 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.184862 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:15.184881 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:15.184962 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:15.210609 1550381 cri.go:89] found id: ""
	I1218 01:49:15.210632 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.210641 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:15.210648 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:15.210712 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:15.238686 1550381 cri.go:89] found id: ""
	I1218 01:49:15.238722 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.238734 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:15.238741 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:15.238815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:15.264618 1550381 cri.go:89] found id: ""
	I1218 01:49:15.264675 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.264684 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:15.264692 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:15.264757 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:15.295205 1550381 cri.go:89] found id: ""
	I1218 01:49:15.295229 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.295244 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:15.295250 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:15.295319 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:15.320375 1550381 cri.go:89] found id: ""
	I1218 01:49:15.320398 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.320406 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:15.320412 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:15.320472 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:15.345880 1550381 cri.go:89] found id: ""
	I1218 01:49:15.345912 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.345921 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:15.345928 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:15.345989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:15.371477 1550381 cri.go:89] found id: ""
	I1218 01:49:15.371499 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.371508 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:15.371518 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:15.371530 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:15.432289 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:15.432325 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:15.513081 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:15.513118 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:15.528085 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:15.528163 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:15.589922 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:15.582052    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.582656    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584154    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584659    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.586181    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:15.589943 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:15.589955 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:17.429823 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:49:17.494063 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:49:17.494186 1550381 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 01:49:17.497997 1550381 out.go:179] * Enabled addons: 
	I1218 01:49:17.500791 1550381 addons.go:530] duration metric: took 1m44.209848117s for enable addons: enabled=[]
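Both addon applies (default-storageclass above, storage-provisioner here) failed with the same connection-refused root cause, so after 1m44s of retries the addon phase gives up with an empty enabled list (enabled=[]) rather than blocking startup.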
	I1218 01:49:18.115485 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:18.126625 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:18.126750 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:18.152997 1550381 cri.go:89] found id: ""
	I1218 01:49:18.153031 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.153041 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:18.153048 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:18.153114 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:18.184726 1550381 cri.go:89] found id: ""
	I1218 01:49:18.184748 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.184757 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:18.184764 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:18.184833 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:18.213873 1550381 cri.go:89] found id: ""
	I1218 01:49:18.213945 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.213971 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:18.213991 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:18.214081 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:18.243010 1550381 cri.go:89] found id: ""
	I1218 01:49:18.243086 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.243109 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:18.243128 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:18.243218 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:18.267052 1550381 cri.go:89] found id: ""
	I1218 01:49:18.267117 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.267142 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:18.267158 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:18.267246 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:18.291939 1550381 cri.go:89] found id: ""
	I1218 01:49:18.292002 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.292026 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:18.292045 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:18.292129 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:18.318195 1550381 cri.go:89] found id: ""
	I1218 01:49:18.318219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.318233 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:18.318240 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:18.318299 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:18.346276 1550381 cri.go:89] found id: ""
	I1218 01:49:18.346310 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.346319 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:18.346329 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:18.346341 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:18.407199 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:18.407257 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:18.440997 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:18.441077 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:18.537719 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:18.529288    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.529919    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.531704    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.532082    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.533717    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:18.537789 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:18.537810 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:18.563514 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:18.563550 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:21.091361 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:21.102189 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:21.102289 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:21.130931 1550381 cri.go:89] found id: ""
	I1218 01:49:21.130958 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.130967 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:21.130974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:21.131033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:21.155877 1550381 cri.go:89] found id: ""
	I1218 01:49:21.155951 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.155984 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:21.156004 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:21.156088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:21.180785 1550381 cri.go:89] found id: ""
	I1218 01:49:21.180809 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.180818 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:21.180824 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:21.180908 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:21.206344 1550381 cri.go:89] found id: ""
	I1218 01:49:21.206366 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.206375 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:21.206381 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:21.206441 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:21.230752 1550381 cri.go:89] found id: ""
	I1218 01:49:21.230775 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.230783 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:21.230789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:21.230846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:21.255317 1550381 cri.go:89] found id: ""
	I1218 01:49:21.255391 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.255416 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:21.255436 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:21.255520 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:21.284319 1550381 cri.go:89] found id: ""
	I1218 01:49:21.284345 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.284355 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:21.284361 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:21.284420 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:21.313090 1550381 cri.go:89] found id: ""
	I1218 01:49:21.313116 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.313124 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:21.313133 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:21.313143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:21.367961 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:21.367997 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:21.382941 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:21.382972 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:21.496229 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:21.487688    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.488579    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490302    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490870    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.492137    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:21.496249 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:21.496261 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:21.526182 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:21.526216 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:24.057294 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:24.070220 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:24.070292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:24.104394 1550381 cri.go:89] found id: ""
	I1218 01:49:24.104419 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.104428 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:24.104434 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:24.104495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:24.129335 1550381 cri.go:89] found id: ""
	I1218 01:49:24.129358 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.129366 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:24.129371 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:24.129429 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:24.153339 1550381 cri.go:89] found id: ""
	I1218 01:49:24.153361 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.153370 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:24.153376 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:24.153439 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:24.178645 1550381 cri.go:89] found id: ""
	I1218 01:49:24.178669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.178677 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:24.178684 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:24.178742 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:24.202721 1550381 cri.go:89] found id: ""
	I1218 01:49:24.202744 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.202753 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:24.202765 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:24.202827 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:24.228231 1550381 cri.go:89] found id: ""
	I1218 01:49:24.228255 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.228264 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:24.228271 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:24.228334 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:24.252564 1550381 cri.go:89] found id: ""
	I1218 01:49:24.252585 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.252593 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:24.252599 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:24.252682 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:24.282899 1550381 cri.go:89] found id: ""
	I1218 01:49:24.282975 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.283000 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:24.283015 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:24.283027 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:24.340471 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:24.340506 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:24.355477 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:24.355511 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:24.448676 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:24.434380    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.435192    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.436820    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441209    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441503    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:24.448701 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:24.448720 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:24.484800 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:24.484875 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:27.016359 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:27.027204 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:27.027276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:27.054358 1550381 cri.go:89] found id: ""
	I1218 01:49:27.054383 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.054392 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:27.054398 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:27.054456 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:27.079191 1550381 cri.go:89] found id: ""
	I1218 01:49:27.079219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.079228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:27.079234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:27.079297 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:27.104834 1550381 cri.go:89] found id: ""
	I1218 01:49:27.104856 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.104865 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:27.104871 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:27.104943 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:27.134064 1550381 cri.go:89] found id: ""
	I1218 01:49:27.134138 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.134154 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:27.134161 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:27.134227 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:27.159891 1550381 cri.go:89] found id: ""
	I1218 01:49:27.159915 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.159925 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:27.159931 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:27.159990 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:27.186008 1550381 cri.go:89] found id: ""
	I1218 01:49:27.186035 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.186044 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:27.186050 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:27.186135 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:27.211311 1550381 cri.go:89] found id: ""
	I1218 01:49:27.211337 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.211346 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:27.211352 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:27.211433 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:27.236397 1550381 cri.go:89] found id: ""
	I1218 01:49:27.236431 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.236440 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:27.236450 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:27.236461 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:27.293966 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:27.294001 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:27.309317 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:27.309355 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:27.380717 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:27.372509    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.373162    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374199    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374687    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.376361    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:27.380737 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:27.380749 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:27.410136 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:27.410175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
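Each cycle begins by asking the CRI for every control-plane component by name, via the crictl invocations shown above, and treating an empty ID list as "not found". A hedged Go sketch of that listing step (component names copied from the log; sudo and crictl on PATH are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names copied from the crictl commands in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// "crictl ps -a --quiet" prints one container ID per line, or nothing.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%-24s crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%-24s %d container(s)\n", name, len(ids))
		}
	}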
	I1218 01:49:29.955798 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:29.968674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:29.968788 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:29.996170 1550381 cri.go:89] found id: ""
	I1218 01:49:29.996197 1550381 logs.go:282] 0 containers: []
	W1218 01:49:29.996208 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:29.996214 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:29.996276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:30.036959 1550381 cri.go:89] found id: ""
	I1218 01:49:30.036983 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.036992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:30.036999 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:30.037067 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:30.069036 1550381 cri.go:89] found id: ""
	I1218 01:49:30.069065 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.069076 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:30.069092 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:30.069231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:30.098534 1550381 cri.go:89] found id: ""
	I1218 01:49:30.098559 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.098568 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:30.098575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:30.098637 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:30.127481 1550381 cri.go:89] found id: ""
	I1218 01:49:30.127506 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.127515 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:30.127521 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:30.127588 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:30.153748 1550381 cri.go:89] found id: ""
	I1218 01:49:30.153773 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.153782 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:30.153789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:30.153872 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:30.178887 1550381 cri.go:89] found id: ""
	I1218 01:49:30.178913 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.178922 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:30.178929 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:30.179010 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:30.204533 1550381 cri.go:89] found id: ""
	I1218 01:49:30.204559 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.204568 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:30.204578 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:30.204589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:30.260146 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:30.260180 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:30.275037 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:30.275067 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:30.338959 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:30.330794    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.331353    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333075    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333584    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.335039    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:30.338978 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:30.338990 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:30.364082 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:30.364116 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
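The kubelet and containerd sections of each cycle come from journalctl, capped at the last 400 lines. A small Go sketch of that gather step, assuming both systemd units exist on the node:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitLogs mirrors the "journalctl -u <unit> -n 400" commands in the log.
	func unitLogs(unit string) ([]byte, error) {
		return exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	}

	func main() {
		for _, unit := range []string{"kubelet", "containerd"} {
			out, err := unitLogs(unit)
			if err != nil {
				fmt.Printf("%s: %v\n", unit, err)
				continue
			}
			fmt.Printf("=== %s: %d bytes of journal ===\n", unit, len(out))
		}
	}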
	I1218 01:49:32.906096 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:32.916660 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:32.916731 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:32.940216 1550381 cri.go:89] found id: ""
	I1218 01:49:32.940238 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.940247 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:32.940254 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:32.940314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:32.967934 1550381 cri.go:89] found id: ""
	I1218 01:49:32.967956 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.967963 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:32.967970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:32.968027 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:32.991930 1550381 cri.go:89] found id: ""
	I1218 01:49:32.991952 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.991961 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:32.991968 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:32.992027 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:33.018215 1550381 cri.go:89] found id: ""
	I1218 01:49:33.018280 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.018303 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:33.018322 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:33.018416 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:33.046738 1550381 cri.go:89] found id: ""
	I1218 01:49:33.046783 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.046794 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:33.046801 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:33.046873 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:33.072642 1550381 cri.go:89] found id: ""
	I1218 01:49:33.072669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.072678 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:33.072684 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:33.072743 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:33.097687 1550381 cri.go:89] found id: ""
	I1218 01:49:33.097713 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.097722 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:33.097729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:33.097980 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:33.125010 1550381 cri.go:89] found id: ""
	I1218 01:49:33.125090 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.125107 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:33.125118 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:33.125134 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:33.139761 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:33.139795 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:33.204966 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:33.197038    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.197630    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199169    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199600    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.201028    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:33.204990 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:33.205002 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:33.230884 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:33.230929 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:33.263709 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:33.263739 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:35.820022 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:35.830483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:35.830552 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:35.855134 1550381 cri.go:89] found id: ""
	I1218 01:49:35.855161 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.855170 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:35.855177 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:35.855239 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:35.881968 1550381 cri.go:89] found id: ""
	I1218 01:49:35.881997 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.882006 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:35.882013 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:35.882074 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:35.907456 1550381 cri.go:89] found id: ""
	I1218 01:49:35.907481 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.907490 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:35.907496 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:35.907555 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:35.936819 1550381 cri.go:89] found id: ""
	I1218 01:49:35.936845 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.936854 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:35.936860 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:35.936939 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:35.961081 1550381 cri.go:89] found id: ""
	I1218 01:49:35.961107 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.961116 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:35.961123 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:35.961187 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:35.985065 1550381 cri.go:89] found id: ""
	I1218 01:49:35.985091 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.985100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:35.985106 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:35.985189 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:36.013869 1550381 cri.go:89] found id: ""
	I1218 01:49:36.013894 1550381 logs.go:282] 0 containers: []
	W1218 01:49:36.013903 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:36.013909 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:36.013972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:36.039260 1550381 cri.go:89] found id: ""
	I1218 01:49:36.039283 1550381 logs.go:282] 0 containers: []
	W1218 01:49:36.039291 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:36.039300 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:36.039312 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:36.069571 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:36.069659 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:36.126151 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:36.126186 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:36.141484 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:36.141514 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:36.209837 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:36.200737    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.201540    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.202385    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.203307    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.204008    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:36.209870 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:36.209883 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
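The "describe nodes" step shells out to the versioned kubectl binary with the node's kubeconfig; with nothing listening on localhost:8443 it exits with status 1, producing the warnings above. A sketch of the same invocation (binary and kubeconfig paths copied verbatim from the log; this is an illustration, not minikube's exact code path):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
		).CombinedOutput()
		if err != nil {
			// With no apiserver on localhost:8443 this fails with
			// "connection refused", exactly as the warnings above show.
			fmt.Printf("describe nodes failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}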
	I1218 01:49:38.735237 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:38.746104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:38.746193 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:38.772225 1550381 cri.go:89] found id: ""
	I1218 01:49:38.772252 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.772261 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:38.772268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:38.772330 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:38.797393 1550381 cri.go:89] found id: ""
	I1218 01:49:38.797420 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.797429 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:38.797435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:38.797498 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:38.822824 1550381 cri.go:89] found id: ""
	I1218 01:49:38.822847 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.822859 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:38.822868 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:38.822927 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:38.847877 1550381 cri.go:89] found id: ""
	I1218 01:49:38.847910 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.847919 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:38.847925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:38.847985 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:38.874529 1550381 cri.go:89] found id: ""
	I1218 01:49:38.874555 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.874564 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:38.874570 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:38.874655 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:38.902339 1550381 cri.go:89] found id: ""
	I1218 01:49:38.902406 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.902429 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:38.902447 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:38.902535 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:38.927712 1550381 cri.go:89] found id: ""
	I1218 01:49:38.927745 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.927754 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:38.927761 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:38.927830 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:38.954870 1550381 cri.go:89] found id: ""
	I1218 01:49:38.954937 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.954964 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:38.954986 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:38.955069 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:39.010028 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:39.010080 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:39.025363 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:39.025392 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:39.091129 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:39.080844    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.081674    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.083594    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.084220    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.086510    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:39.091201 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:39.091221 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:39.116775 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:39.116809 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
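The dmesg section keeps only warning-and-above kernel messages, again capped at 400 lines. A sketch of that pipeline (flags copied verbatim from the log; bash -c is needed because of the pipe):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		).CombinedOutput()
		if err != nil {
			fmt.Println("dmesg gather failed:", err)
			return
		}
		fmt.Printf("%s", out)
	}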
	I1218 01:49:41.650913 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:41.662276 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:41.662344 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:41.731218 1550381 cri.go:89] found id: ""
	I1218 01:49:41.731246 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.731255 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:41.731261 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:41.731319 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:41.756567 1550381 cri.go:89] found id: ""
	I1218 01:49:41.756665 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.756680 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:41.756686 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:41.756755 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:41.785421 1550381 cri.go:89] found id: ""
	I1218 01:49:41.785449 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.785458 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:41.785464 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:41.785522 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:41.810479 1550381 cri.go:89] found id: ""
	I1218 01:49:41.810501 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.810510 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:41.810524 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:41.810590 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:41.835839 1550381 cri.go:89] found id: ""
	I1218 01:49:41.835863 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.835872 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:41.835878 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:41.835940 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:41.864064 1550381 cri.go:89] found id: ""
	I1218 01:49:41.864092 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.864100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:41.864106 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:41.864162 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:41.889810 1550381 cri.go:89] found id: ""
	I1218 01:49:41.889880 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.889911 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:41.889924 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:41.889997 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:41.913756 1550381 cri.go:89] found id: ""
	I1218 01:49:41.913824 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.913849 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:41.913871 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:41.913902 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:41.943258 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:41.943283 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:41.998631 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:41.998673 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:42.016861 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:42.016892 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:42.086550 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:42.077000    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.077668    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.079628    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.080105    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.081866    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:42.086592 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:42.086609 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
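The "container status" step prefers crictl and falls back to docker, as the backticked command above shows. A sketch of that fallback chain (command copied verbatim from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same fallback as the log: use crictl if present, otherwise docker.
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		).CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker ps failed:", err)
			return
		}
		fmt.Printf("%s", out)
	}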
	I1218 01:49:44.616940 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:44.627561 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:44.627705 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:44.700300 1550381 cri.go:89] found id: ""
	I1218 01:49:44.700322 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.700331 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:44.700337 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:44.700396 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:44.736586 1550381 cri.go:89] found id: ""
	I1218 01:49:44.736669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.736685 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:44.736693 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:44.736760 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:44.760996 1550381 cri.go:89] found id: ""
	I1218 01:49:44.761020 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.761029 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:44.761035 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:44.761102 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:44.786601 1550381 cri.go:89] found id: ""
	I1218 01:49:44.786637 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.786646 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:44.786655 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:44.786723 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:44.812292 1550381 cri.go:89] found id: ""
	I1218 01:49:44.812314 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.812322 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:44.812329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:44.812415 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:44.838185 1550381 cri.go:89] found id: ""
	I1218 01:49:44.838219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.838229 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:44.838236 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:44.838298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:44.867060 1550381 cri.go:89] found id: ""
	I1218 01:49:44.867081 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.867089 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:44.867095 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:44.867151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:44.892070 1550381 cri.go:89] found id: ""
	I1218 01:49:44.892099 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.892108 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:44.892117 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:44.892133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:44.906549 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:44.906575 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:44.971842 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:44.963108    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.963929    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.965509    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.966125    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.967743    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:44.971863 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:44.971877 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:44.997318 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:44.997352 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:45.078604 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:45.078658 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:47.669132 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:47.684661 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:47.684728 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:47.724476 1550381 cri.go:89] found id: ""
	I1218 01:49:47.724498 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.724509 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:47.724515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:47.724576 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:47.758012 1550381 cri.go:89] found id: ""
	I1218 01:49:47.758036 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.758044 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:47.758051 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:47.758109 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:47.786154 1550381 cri.go:89] found id: ""
	I1218 01:49:47.786180 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.786189 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:47.786196 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:47.786258 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:47.810902 1550381 cri.go:89] found id: ""
	I1218 01:49:47.810928 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.810937 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:47.810944 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:47.811003 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:47.836006 1550381 cri.go:89] found id: ""
	I1218 01:49:47.836032 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.836040 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:47.836049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:47.836119 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:47.861054 1550381 cri.go:89] found id: ""
	I1218 01:49:47.861078 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.861087 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:47.861094 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:47.861167 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:47.889731 1550381 cri.go:89] found id: ""
	I1218 01:49:47.889756 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.889765 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:47.889772 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:47.889829 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:47.918028 1550381 cri.go:89] found id: ""
	I1218 01:49:47.918055 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.918064 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:47.918073 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:47.918090 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:47.972822 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:47.972860 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:47.987701 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:47.987730 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:48.055884 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:48.046737    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.047612    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.048505    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050346    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050922    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:48.046737    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.047612    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.048505    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050346    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050922    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
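Every "describe nodes" attempt in this run fails the same way: the versioned kubectl dials the apiserver at localhost:8443 and the TCP connection is refused, meaning nothing is listening on that port inside the node, which is consistent with the empty crictl results for "kube-apiserver" above. A minimal Go sketch that reproduces the probe on the node itself (the address is taken from the errors above):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the same endpoint kubectl uses above; on the broken node this
// prints "connect: connection refused" because no apiserver container
// ever started.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is reachable")
}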
	I1218 01:49:48.055906 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:48.055919 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:48.081983 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:48.082021 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
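The block above is one iteration of minikube's apiserver wait loop: check for a kube-apiserver process with pgrep, list each expected control-plane container with crictl (all empty here), then gather kubelet, dmesg, describe-nodes, containerd, and container-status diagnostics before retrying roughly three seconds later. A self-contained Go sketch of that loop, with the caveat that the helper names (checkAPIServer, listContainers) are illustrative rather than minikube's actual identifiers:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServer mirrors the probe in the log: pgrep exits 0 only when a
// kube-apiserver process belonging to the minikube profile exists.
func checkAPIServer() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// listContainers mirrors "sudo crictl ps -a --quiet --name=<name>"; empty
// output corresponds to the repeated found id: "" lines above.
func listContainers(name string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	return string(out), err
}

func main() {
	names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for attempt := 0; attempt < 10 && !checkAPIServer(); attempt++ {
		for _, n := range names {
			if out, err := listContainers(n); err == nil && out == "" {
				fmt.Printf("no container was found matching %q\n", n)
			}
		}
		time.Sleep(3 * time.Second) // the timestamps above advance ~3s per cycle
	}
}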
	I1218 01:49:50.614399 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:50.625532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:50.625607 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:50.669636 1550381 cri.go:89] found id: ""
	I1218 01:49:50.669663 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.669672 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:50.669678 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:50.669737 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:50.731793 1550381 cri.go:89] found id: ""
	I1218 01:49:50.731820 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.731829 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:50.731835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:50.731903 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:50.758384 1550381 cri.go:89] found id: ""
	I1218 01:49:50.758407 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.758416 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:50.758422 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:50.758481 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:50.783123 1550381 cri.go:89] found id: ""
	I1218 01:49:50.783148 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.783157 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:50.783163 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:50.783224 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:50.807986 1550381 cri.go:89] found id: ""
	I1218 01:49:50.808010 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.808019 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:50.808026 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:50.808084 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:50.833014 1550381 cri.go:89] found id: ""
	I1218 01:49:50.833037 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.833058 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:50.833066 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:50.833125 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:50.857525 1550381 cri.go:89] found id: ""
	I1218 01:49:50.857551 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.857560 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:50.857567 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:50.857631 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:50.882511 1550381 cri.go:89] found id: ""
	I1218 01:49:50.882535 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.882543 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:50.882552 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:50.882565 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:50.916936 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:50.916963 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:50.972064 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:50.972098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:50.987003 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:50.987031 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:51.056796 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:51.048453    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.048999    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.050667    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.051216    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.052850    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:51.048453    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.048999    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.050667    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.051216    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.052850    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:51.056817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:51.056829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:53.582769 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:53.594237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:53.594316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:53.619778 1550381 cri.go:89] found id: ""
	I1218 01:49:53.619800 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.619809 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:53.619815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:53.619877 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:53.677064 1550381 cri.go:89] found id: ""
	I1218 01:49:53.677087 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.677097 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:53.677103 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:53.677179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:53.733772 1550381 cri.go:89] found id: ""
	I1218 01:49:53.733798 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.733808 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:53.733815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:53.733876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:53.759569 1550381 cri.go:89] found id: ""
	I1218 01:49:53.759594 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.759603 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:53.759609 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:53.759667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:53.785969 1550381 cri.go:89] found id: ""
	I1218 01:49:53.785993 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.786002 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:53.786008 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:53.786072 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:53.810819 1550381 cri.go:89] found id: ""
	I1218 01:49:53.810843 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.810851 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:53.810858 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:53.810923 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:53.836207 1550381 cri.go:89] found id: ""
	I1218 01:49:53.836271 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.836295 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:53.836314 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:53.836395 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:53.860468 1550381 cri.go:89] found id: ""
	I1218 01:49:53.860499 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.860508 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:53.860518 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:53.860537 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:53.917328 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:53.917365 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:53.932367 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:53.932407 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:54.001703 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:53.991890    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.992813    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994440    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994861    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.996408    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:53.991890    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.992813    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994440    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994861    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.996408    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:54.001723 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:54.001737 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:54.030548 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:54.030584 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
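For reference, the diagnostic commands each cycle runs are ordinary shell invocations and can be replayed by hand on the node. A sketch that shells them out the same way the log does (gatherLogs is an illustrative name; the command strings are copied verbatim from the Run: lines above):

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs replays the per-cycle diagnostics: kubelet and containerd
// journals, filtered dmesg, and the container listing with its docker
// fallback for hosts where crictl is missing.
func gatherLogs() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo journalctl -u containerd -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("== %s ==\n%s\n", c, out)
	}
}

func main() { gatherLogs() }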
	I1218 01:49:56.561340 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:56.571927 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:56.571998 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:56.595966 1550381 cri.go:89] found id: ""
	I1218 01:49:56.595996 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.596006 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:56.596012 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:56.596073 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:56.620113 1550381 cri.go:89] found id: ""
	I1218 01:49:56.620136 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.620145 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:56.620151 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:56.620211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:56.655375 1550381 cri.go:89] found id: ""
	I1218 01:49:56.655401 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.655410 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:56.655417 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:56.655477 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:56.711903 1550381 cri.go:89] found id: ""
	I1218 01:49:56.711931 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.711940 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:56.711946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:56.712007 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:56.748501 1550381 cri.go:89] found id: ""
	I1218 01:49:56.748527 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.748536 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:56.748542 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:56.748600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:56.774097 1550381 cri.go:89] found id: ""
	I1218 01:49:56.774121 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.774130 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:56.774137 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:56.774196 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:56.802594 1550381 cri.go:89] found id: ""
	I1218 01:49:56.802618 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.802627 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:56.802633 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:56.802690 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:56.827592 1550381 cri.go:89] found id: ""
	I1218 01:49:56.827615 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.827623 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:56.827633 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:56.827645 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:56.852403 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:56.852433 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:56.880076 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:56.880109 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:56.935675 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:56.935712 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:56.950522 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:56.950549 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:57.019412 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:57.010556    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.011245    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013030    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013710    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.014875    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:57.010556    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.011245    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013030    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013710    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.014875    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:59.521100 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:59.531832 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:59.531908 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:59.557309 1550381 cri.go:89] found id: ""
	I1218 01:49:59.557333 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.557342 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:59.557349 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:59.557406 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:59.581813 1550381 cri.go:89] found id: ""
	I1218 01:49:59.581889 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.581911 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:59.581919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:59.581978 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:59.605979 1550381 cri.go:89] found id: ""
	I1218 01:49:59.606003 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.606012 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:59.606018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:59.606101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:59.631076 1550381 cri.go:89] found id: ""
	I1218 01:49:59.631101 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.631110 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:59.631117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:59.631210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:59.670164 1550381 cri.go:89] found id: ""
	I1218 01:49:59.670189 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.670198 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:59.670205 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:59.670309 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:59.706830 1550381 cri.go:89] found id: ""
	I1218 01:49:59.706856 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.706865 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:59.706872 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:59.706953 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:59.739787 1550381 cri.go:89] found id: ""
	I1218 01:49:59.739815 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.739824 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:59.739830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:59.739892 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:59.766523 1550381 cri.go:89] found id: ""
	I1218 01:49:59.766548 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.766558 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:59.766568 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:59.766579 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:59.822153 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:59.822193 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:59.837991 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:59.838016 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:59.905967 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:59.897227    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.897769    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899360    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899879    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.901423    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:59.897227    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.897769    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899360    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899879    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.901423    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:59.905990 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:59.906003 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:59.931368 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:59.931401 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:02.467452 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:02.478157 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:02.478230 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:02.504286 1550381 cri.go:89] found id: ""
	I1218 01:50:02.504311 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.504321 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:02.504328 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:02.504390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:02.530207 1550381 cri.go:89] found id: ""
	I1218 01:50:02.530232 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.530242 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:02.530249 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:02.530308 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:02.561278 1550381 cri.go:89] found id: ""
	I1218 01:50:02.561305 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.561314 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:02.561320 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:02.561383 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:02.586119 1550381 cri.go:89] found id: ""
	I1218 01:50:02.586144 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.586153 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:02.586159 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:02.586218 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:02.611212 1550381 cri.go:89] found id: ""
	I1218 01:50:02.611239 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.611249 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:02.611256 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:02.611317 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:02.638670 1550381 cri.go:89] found id: ""
	I1218 01:50:02.638697 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.638705 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:02.638715 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:02.638819 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:02.699868 1550381 cri.go:89] found id: ""
	I1218 01:50:02.699897 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.699906 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:02.699913 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:02.699971 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:02.753340 1550381 cri.go:89] found id: ""
	I1218 01:50:02.753371 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.753381 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:02.753391 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:02.753402 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:02.809735 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:02.809769 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:02.825241 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:02.825271 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:02.894096 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:02.885135    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.885712    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887483    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887956    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.889549    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:02.885135    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.885712    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887483    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887956    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.889549    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:02.894118 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:02.894130 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:02.919985 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:02.920021 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:05.450883 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:05.461914 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:05.461989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:05.487197 1550381 cri.go:89] found id: ""
	I1218 01:50:05.487221 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.487230 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:05.487237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:05.487297 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:05.513273 1550381 cri.go:89] found id: ""
	I1218 01:50:05.513304 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.513313 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:05.513321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:05.513385 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:05.544168 1550381 cri.go:89] found id: ""
	I1218 01:50:05.544191 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.544200 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:05.544206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:05.544306 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:05.570574 1550381 cri.go:89] found id: ""
	I1218 01:50:05.570597 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.570607 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:05.570613 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:05.570675 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:05.598812 1550381 cri.go:89] found id: ""
	I1218 01:50:05.598837 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.598845 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:05.598852 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:05.598915 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:05.628314 1550381 cri.go:89] found id: ""
	I1218 01:50:05.628339 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.628348 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:05.628354 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:05.628418 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:05.665714 1550381 cri.go:89] found id: ""
	I1218 01:50:05.665742 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.665751 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:05.665757 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:05.665817 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:05.733576 1550381 cri.go:89] found id: ""
	I1218 01:50:05.733603 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.733624 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:05.733634 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:05.733652 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:05.795404 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:05.795439 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:05.811319 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:05.811347 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:05.878494 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:05.869308    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.869863    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.871616    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.872034    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.873656    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:05.869308    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.869863    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.871616    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.872034    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.873656    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:05.878517 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:05.878532 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:05.904153 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:05.904185 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:08.433275 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:08.443880 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:08.443983 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:08.468382 1550381 cri.go:89] found id: ""
	I1218 01:50:08.468408 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.468417 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:08.468424 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:08.468483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:08.498576 1550381 cri.go:89] found id: ""
	I1218 01:50:08.498629 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.498656 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:08.498662 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:08.498764 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:08.524767 1550381 cri.go:89] found id: ""
	I1218 01:50:08.524790 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.524799 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:08.524806 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:08.524868 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:08.551353 1550381 cri.go:89] found id: ""
	I1218 01:50:08.551380 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.551399 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:08.551406 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:08.551482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:08.577687 1550381 cri.go:89] found id: ""
	I1218 01:50:08.577713 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.577722 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:08.577729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:08.577816 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:08.603410 1550381 cri.go:89] found id: ""
	I1218 01:50:08.603434 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.603443 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:08.603450 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:08.603530 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:08.630799 1550381 cri.go:89] found id: ""
	I1218 01:50:08.630824 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.630833 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:08.630840 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:08.630903 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:08.705200 1550381 cri.go:89] found id: ""
	I1218 01:50:08.705228 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.705237 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:08.705247 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:08.705260 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:08.733020 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:08.733047 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:08.798171 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:08.789301    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.790118    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.791840    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.792498    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.794100    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:08.789301    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.790118    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.791840    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.792498    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.794100    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:08.798195 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:08.798217 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:08.823651 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:08.823682 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:08.851693 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:08.851720 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:11.407503 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:11.418083 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:11.418157 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:11.443131 1550381 cri.go:89] found id: ""
	I1218 01:50:11.443153 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.443161 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:11.443167 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:11.443225 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:11.468456 1550381 cri.go:89] found id: ""
	I1218 01:50:11.468480 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.468489 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:11.468495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:11.468559 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:11.494875 1550381 cri.go:89] found id: ""
	I1218 01:50:11.494900 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.494910 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:11.494916 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:11.494976 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:11.522672 1550381 cri.go:89] found id: ""
	I1218 01:50:11.522695 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.522703 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:11.522710 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:11.522774 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:11.550689 1550381 cri.go:89] found id: ""
	I1218 01:50:11.550713 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.550723 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:11.550729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:11.550789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:11.579573 1550381 cri.go:89] found id: ""
	I1218 01:50:11.579600 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.579608 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:11.579615 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:11.579677 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:11.605240 1550381 cri.go:89] found id: ""
	I1218 01:50:11.605265 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.605274 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:11.605281 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:11.605348 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:11.631171 1550381 cri.go:89] found id: ""
	I1218 01:50:11.631198 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.631208 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:11.631217 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:11.631228 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:11.709937 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:11.709969 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:11.779988 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:11.780023 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:11.795215 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:11.795243 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:11.862143 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:11.853713    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.854370    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856115    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856647    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.858165    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:11.862165 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:11.862177 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
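
Each probe cycle above walks a fixed list of control-plane component names and asks crictl for matching container IDs; an empty result produces the "No container was found matching ..." warnings. A rough Go sketch of that per-component check follows. It illustrates the pattern, not minikube's actual cri.go, and since the logged commands run under sudo it would need root privileges.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component names the log cycles through.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes exited containers.
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name", name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}

Every component coming back empty, as in the cycles above, means the kubelet never managed to start any control-plane containers at all.
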
	I1218 01:50:14.389878 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:14.400681 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:14.400756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:14.427103 1550381 cri.go:89] found id: ""
	I1218 01:50:14.427127 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.427136 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:14.427142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:14.427200 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:14.455157 1550381 cri.go:89] found id: ""
	I1218 01:50:14.455180 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.455189 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:14.455195 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:14.455260 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:14.481712 1550381 cri.go:89] found id: ""
	I1218 01:50:14.481738 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.481752 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:14.481759 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:14.481821 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:14.506286 1550381 cri.go:89] found id: ""
	I1218 01:50:14.506312 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.506320 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:14.506327 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:14.506385 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:14.531764 1550381 cri.go:89] found id: ""
	I1218 01:50:14.531789 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.531797 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:14.531804 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:14.531864 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:14.556792 1550381 cri.go:89] found id: ""
	I1218 01:50:14.556817 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.556826 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:14.556832 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:14.556896 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:14.581496 1550381 cri.go:89] found id: ""
	I1218 01:50:14.581521 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.581531 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:14.581537 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:14.581603 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:14.605950 1550381 cri.go:89] found id: ""
	I1218 01:50:14.605973 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.605982 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:14.605992 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:14.606007 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:14.631804 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:14.631838 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:14.684967 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:14.685004 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:14.769991 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:14.770039 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:14.785356 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:14.785391 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:14.851585 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:14.843391    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.844021    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.845569    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.846097    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.847598    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
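
The timestamps (01:50:08, :11, :14, :17, ...) show the whole diagnostic pass repeating roughly every three seconds: minikube is polling for a kube-apiserver process until one appears or an overall timeout expires. A sketch of such a poll loop follows; the use of pgrep mirrors the logged commands, but the interval and deadline are assumptions for illustration, not minikube's exact values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed; the real timeout is minikube's
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches and 1 when none do,
		// so err == nil means the apiserver process exists.
		err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
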
	I1218 01:50:17.353376 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:17.364408 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:17.364479 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:17.389035 1550381 cri.go:89] found id: ""
	I1218 01:50:17.389062 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.389071 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:17.389077 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:17.389141 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:17.418594 1550381 cri.go:89] found id: ""
	I1218 01:50:17.418620 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.418628 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:17.418634 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:17.418693 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:17.444908 1550381 cri.go:89] found id: ""
	I1218 01:50:17.444930 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.444938 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:17.444945 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:17.445006 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:17.470076 1550381 cri.go:89] found id: ""
	I1218 01:50:17.470100 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.470109 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:17.470117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:17.470178 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:17.494949 1550381 cri.go:89] found id: ""
	I1218 01:50:17.494972 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.494984 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:17.494992 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:17.495050 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:17.523740 1550381 cri.go:89] found id: ""
	I1218 01:50:17.523767 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.523775 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:17.523782 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:17.523840 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:17.551184 1550381 cri.go:89] found id: ""
	I1218 01:50:17.551212 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.551220 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:17.551227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:17.551290 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:17.576421 1550381 cri.go:89] found id: ""
	I1218 01:50:17.576446 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.576454 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:17.576464 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:17.576476 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:17.640879 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:17.631690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.632375    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634161    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.636185    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:17.640898 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:17.640911 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:17.719096 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:17.719184 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:17.749240 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:17.749266 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:17.804542 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:17.804581 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
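
The "container status" step uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: it prefers crictl and only falls back to docker ps -a if the crictl invocation fails. A hedged Go equivalent of that fallback (illustrative only; the real step runs both commands under sudo over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl, as the logged one-liner does; if it is missing or its
	// invocation fails, fall back to docker.
	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("neither crictl nor docker could list containers:", err)
		return
	}
	fmt.Print(string(out))
}
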
	I1218 01:50:20.319731 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:20.329891 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:20.329962 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:20.353449 1550381 cri.go:89] found id: ""
	I1218 01:50:20.353471 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.353479 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:20.353485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:20.353542 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:20.378067 1550381 cri.go:89] found id: ""
	I1218 01:50:20.378089 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.378098 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:20.378104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:20.378162 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:20.403262 1550381 cri.go:89] found id: ""
	I1218 01:50:20.403288 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.403297 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:20.403304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:20.403362 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:20.430817 1550381 cri.go:89] found id: ""
	I1218 01:50:20.430842 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.430851 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:20.430858 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:20.430916 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:20.456026 1550381 cri.go:89] found id: ""
	I1218 01:50:20.456049 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.456057 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:20.456064 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:20.456123 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:20.485362 1550381 cri.go:89] found id: ""
	I1218 01:50:20.485388 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.485397 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:20.485404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:20.485461 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:20.509757 1550381 cri.go:89] found id: ""
	I1218 01:50:20.509779 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.509788 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:20.509794 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:20.509851 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:20.540098 1550381 cri.go:89] found id: ""
	I1218 01:50:20.540122 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.540130 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:20.540139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:20.540151 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:20.597234 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:20.597269 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:20.611800 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:20.611826 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:20.741195 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:20.732954    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.733490    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735078    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735547    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.737340    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:20.741222 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:20.741235 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:20.766650 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:20.766689 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:23.295459 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:23.306363 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:23.306450 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:23.331822 1550381 cri.go:89] found id: ""
	I1218 01:50:23.331848 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.331857 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:23.331864 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:23.331925 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:23.357194 1550381 cri.go:89] found id: ""
	I1218 01:50:23.357219 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.357228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:23.357234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:23.357293 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:23.383201 1550381 cri.go:89] found id: ""
	I1218 01:50:23.383228 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.383238 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:23.383245 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:23.383306 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:23.409593 1550381 cri.go:89] found id: ""
	I1218 01:50:23.409619 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.409628 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:23.409636 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:23.409694 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:23.434134 1550381 cri.go:89] found id: ""
	I1218 01:50:23.434157 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.434167 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:23.434173 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:23.434231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:23.458615 1550381 cri.go:89] found id: ""
	I1218 01:50:23.458637 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.458645 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:23.458652 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:23.458714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:23.483411 1550381 cri.go:89] found id: ""
	I1218 01:50:23.483433 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.483441 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:23.483447 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:23.483505 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:23.510673 1550381 cri.go:89] found id: ""
	I1218 01:50:23.510697 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.510707 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:23.510716 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:23.510727 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:23.569129 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:23.569169 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:23.583622 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:23.583654 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:23.660608 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:23.639546    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.640211    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.646944    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.648070    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.649869    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:23.660646 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:23.660659 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:23.689685 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:23.689724 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
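
The remaining gathering steps tail systemd journals and the kernel ring buffer: journalctl -u kubelet -n 400, journalctl -u containerd -n 400, and a dmesg call filtered to warning level and above. A compact sketch of the same collection is below; it assumes a systemd host with journal access, and its dmesg flag spelling differs slightly from the logged invocation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Tail the same units the log gathers; -n 400 matches the logged commands.
	for _, unit := range []string{"kubelet", "containerd"} {
		out, err := exec.Command("journalctl", "-u", unit, "-n", "400").CombinedOutput()
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("== last 400 journal lines for %s ==\n%s", unit, out)
	}
	// Kernel messages at warning level and above, mirroring the logged
	// dmesg --level warn,err,crit,alert,emerg filter.
	if out, err := exec.Command("dmesg", "--level", "warn,err,crit,alert,emerg").CombinedOutput(); err == nil {
		fmt.Printf("== kernel warnings ==\n%s", out)
	}
}
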
	I1218 01:50:26.245910 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:26.256314 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:26.256387 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:26.281224 1550381 cri.go:89] found id: ""
	I1218 01:50:26.281247 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.281257 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:26.281263 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:26.281331 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:26.310540 1550381 cri.go:89] found id: ""
	I1218 01:50:26.310567 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.310576 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:26.310583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:26.310642 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:26.336372 1550381 cri.go:89] found id: ""
	I1218 01:50:26.336399 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.336407 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:26.336413 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:26.336473 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:26.362095 1550381 cri.go:89] found id: ""
	I1218 01:50:26.362120 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.362129 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:26.362135 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:26.362199 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:26.387399 1550381 cri.go:89] found id: ""
	I1218 01:50:26.387424 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.387433 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:26.387439 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:26.387502 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:26.412769 1550381 cri.go:89] found id: ""
	I1218 01:50:26.412794 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.412803 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:26.412809 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:26.412878 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:26.437098 1550381 cri.go:89] found id: ""
	I1218 01:50:26.437124 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.437132 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:26.437139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:26.437223 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:26.462717 1550381 cri.go:89] found id: ""
	I1218 01:50:26.462744 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.462754 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:26.462764 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:26.462782 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:26.521734 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:26.521768 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:26.536748 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:26.536777 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:26.603709 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:26.594893    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.595492    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597257    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597791    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.599350    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:26.603730 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:26.603749 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:26.632522 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:26.632599 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:29.191094 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:29.202310 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:29.202386 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:29.227851 1550381 cri.go:89] found id: ""
	I1218 01:50:29.227878 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.227887 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:29.227893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:29.227960 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:29.257631 1550381 cri.go:89] found id: ""
	I1218 01:50:29.257656 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.257665 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:29.257671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:29.257740 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:29.283590 1550381 cri.go:89] found id: ""
	I1218 01:50:29.283615 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.283625 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:29.283631 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:29.283696 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:29.311410 1550381 cri.go:89] found id: ""
	I1218 01:50:29.311436 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.311445 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:29.311452 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:29.311517 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:29.342669 1550381 cri.go:89] found id: ""
	I1218 01:50:29.342695 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.342714 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:29.342721 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:29.342815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:29.367296 1550381 cri.go:89] found id: ""
	I1218 01:50:29.367321 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.367330 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:29.367336 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:29.367396 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:29.392236 1550381 cri.go:89] found id: ""
	I1218 01:50:29.392260 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.392269 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:29.392275 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:29.392336 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:29.417512 1550381 cri.go:89] found id: ""
	I1218 01:50:29.417538 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.417547 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:29.417556 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:29.417594 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:29.488248 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:29.479597    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.480325    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482140    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482810    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.484373    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:29.488272 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:29.488289 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:29.513850 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:29.513884 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:29.543041 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:29.543071 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:29.602048 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:29.602087 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:32.117433 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:32.128498 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:32.128589 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:32.153547 1550381 cri.go:89] found id: ""
	I1218 01:50:32.153571 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.153580 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:32.153587 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:32.153647 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:32.178431 1550381 cri.go:89] found id: ""
	I1218 01:50:32.178455 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.178464 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:32.178471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:32.178529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:32.203336 1550381 cri.go:89] found id: ""
	I1218 01:50:32.203362 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.203371 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:32.203377 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:32.203434 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:32.230677 1550381 cri.go:89] found id: ""
	I1218 01:50:32.230702 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.230712 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:32.230718 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:32.230800 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:32.255544 1550381 cri.go:89] found id: ""
	I1218 01:50:32.255567 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.255576 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:32.255583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:32.255661 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:32.282405 1550381 cri.go:89] found id: ""
	I1218 01:50:32.282468 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.282486 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:32.282493 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:32.282551 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:32.311100 1550381 cri.go:89] found id: ""
	I1218 01:50:32.311124 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.311133 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:32.311139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:32.311195 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:32.339521 1550381 cri.go:89] found id: ""
	I1218 01:50:32.339550 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.339559 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:32.339568 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:32.339579 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:32.364381 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:32.364417 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:32.396991 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:32.397017 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:32.453109 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:32.453144 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:32.468129 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:32.468158 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:32.534370 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:32.525120    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.525758    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.527579    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.528149    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.529876    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
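Every `describe nodes` attempt above fails the same way: kubectl cannot reach the API server at localhost:8443, which is consistent with the empty kube-apiserver results from crictl a few lines earlier. A minimal Go sketch of the underlying reachability check (the `probeAPIServer` name and the 2-second timeout are illustrative, not minikube code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer reproduces the check that kubectl's error implies: a
// plain TCP dial to the API server endpoint. "connection refused" here
// means nothing is listening on the port at all, which matches the
// empty `crictl ps --name=kube-apiserver` results in the log.
func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err // e.g. "connect: connection refused"
	}
	return conn.Close()
}

func main() {
	if err := probeAPIServer("localhost:8443"); err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	fmt.Println("apiserver port is open")
}
```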
	I1218 01:50:35.036282 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:35.048487 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:35.048567 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:35.076340 1550381 cri.go:89] found id: ""
	I1218 01:50:35.076365 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.076373 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:35.076386 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:35.076451 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:35.104187 1550381 cri.go:89] found id: ""
	I1218 01:50:35.104211 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.104221 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:35.104227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:35.104290 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:35.131465 1550381 cri.go:89] found id: ""
	I1218 01:50:35.131536 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.131563 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:35.131583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:35.131672 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:35.158198 1550381 cri.go:89] found id: ""
	I1218 01:50:35.158264 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.158281 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:35.158289 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:35.158352 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:35.185390 1550381 cri.go:89] found id: ""
	I1218 01:50:35.185462 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.185476 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:35.185483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:35.185555 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:35.215800 1550381 cri.go:89] found id: ""
	I1218 01:50:35.215893 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.215919 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:35.215946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:35.216046 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:35.243559 1550381 cri.go:89] found id: ""
	I1218 01:50:35.243627 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.243652 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:35.243671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:35.243748 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:35.272051 1550381 cri.go:89] found id: ""
	I1218 01:50:35.272079 1550381 logs.go:282] 0 containers: []
	W1218 01:50:35.272088 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:35.272099 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:35.272110 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:35.328789 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:35.328829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:35.343746 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:35.343791 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:35.410255 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:35.400072    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.400453    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.402453    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.404159    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:35.404848    6517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:35.410278 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:35.410290 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:35.436151 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:35.436194 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
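The container-status step uses a shell fallback: run `crictl ps -a` if crictl is on the PATH, otherwise fall back to `docker ps -a`. A rough Go equivalent of that fallback, assuming sudo is available on the node; `containerStatus` is a made-up name for illustration, not the test harness's code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the shell fallback in the log: prefer crictl
// when it is installed, and fall back to docker if crictl is missing
// or its invocation fails.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(string(out))
}
```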
	I1218 01:50:37.964765 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:37.975595 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:37.975668 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:38.006140 1550381 cri.go:89] found id: ""
	I1218 01:50:38.006168 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.006179 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:38.006186 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:38.006254 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:38.032670 1550381 cri.go:89] found id: ""
	I1218 01:50:38.032696 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.032704 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:38.032711 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:38.032789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:38.058961 1550381 cri.go:89] found id: ""
	I1218 01:50:38.058991 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.059004 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:38.059013 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:38.059086 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:38.093028 1550381 cri.go:89] found id: ""
	I1218 01:50:38.093053 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.093062 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:38.093069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:38.093130 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:38.118000 1550381 cri.go:89] found id: ""
	I1218 01:50:38.118024 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.118033 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:38.118040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:38.118099 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:38.143582 1550381 cri.go:89] found id: ""
	I1218 01:50:38.143609 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.143620 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:38.143627 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:38.143687 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:38.170663 1550381 cri.go:89] found id: ""
	I1218 01:50:38.170692 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.170701 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:38.170707 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:38.170773 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:38.195587 1550381 cri.go:89] found id: ""
	I1218 01:50:38.195610 1550381 logs.go:282] 0 containers: []
	W1218 01:50:38.195619 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:38.195629 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:38.195640 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:38.250718 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:38.250757 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:38.265740 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:38.265766 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:38.332572 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:38.323728    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.324588    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.326294    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.326975    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:38.328670    6632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:38.332602 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:38.332653 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:38.358827 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:38.358864 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
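The cycles above repeat roughly every three seconds: a `pgrep` for a kube-apiserver process, then the full crictl sweep and log gathering. A sketch of a poll-until-deadline loop in that shape (the interval, timeout, and `waitForAPIServer` name are illustrative; this is not minikube's actual retry loop):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls in the same shape as the log's retry loop:
// every interval it asks pgrep whether a kube-apiserver process exists,
// until one appears or the deadline passes.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process is found.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 1*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```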
	I1218 01:50:40.892874 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:40.912835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:40.912911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:40.974270 1550381 cri.go:89] found id: ""
	I1218 01:50:40.974363 1550381 logs.go:282] 0 containers: []
	W1218 01:50:40.974391 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:40.974427 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:40.974538 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:41.009749 1550381 cri.go:89] found id: ""
	I1218 01:50:41.009826 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.009862 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:41.009893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:41.009999 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:41.036864 1550381 cri.go:89] found id: ""
	I1218 01:50:41.036933 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.036959 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:41.036974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:41.037050 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:41.062681 1550381 cri.go:89] found id: ""
	I1218 01:50:41.062708 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.062717 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:41.062723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:41.062785 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:41.088510 1550381 cri.go:89] found id: ""
	I1218 01:50:41.088537 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.088562 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:41.088569 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:41.088677 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:41.113288 1550381 cri.go:89] found id: ""
	I1218 01:50:41.113311 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.113321 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:41.113328 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:41.113431 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:41.138413 1550381 cri.go:89] found id: ""
	I1218 01:50:41.138438 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.138447 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:41.138453 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:41.138510 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:41.164559 1550381 cri.go:89] found id: ""
	I1218 01:50:41.164592 1550381 logs.go:282] 0 containers: []
	W1218 01:50:41.164601 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:41.164612 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:41.164655 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:41.220220 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:41.220257 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:41.235147 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:41.235175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:41.301835 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:41.291925    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.292729    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.294375    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.295219    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:41.297559    6748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:41.301860 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:41.301873 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:41.327289 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:41.327322 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
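With `--quiet`, crictl prints one container ID per line, so an empty result is what the log renders as `found id: ""` followed by `0 containers: []`. A small sketch of running that query and splitting the output, under the assumption that crictl is installed (`listContainerIDs` is a hypothetical helper, not cri.go itself):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same query the log shows and splits the
// --quiet output (one container ID per line) into a slice. An empty
// slice corresponds to the log's `0 containers: []` lines.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```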
	I1218 01:50:43.855149 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:43.865567 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:43.865639 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:43.901178 1550381 cri.go:89] found id: ""
	I1218 01:50:43.901222 1550381 logs.go:282] 0 containers: []
	W1218 01:50:43.901231 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:43.901237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:43.901308 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:43.975051 1550381 cri.go:89] found id: ""
	I1218 01:50:43.975085 1550381 logs.go:282] 0 containers: []
	W1218 01:50:43.975095 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:43.975103 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:43.975175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:44.002012 1550381 cri.go:89] found id: ""
	I1218 01:50:44.002051 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.002062 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:44.002069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:44.002155 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:44.029977 1550381 cri.go:89] found id: ""
	I1218 01:50:44.030055 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.030090 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:44.030122 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:44.030212 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:44.055154 1550381 cri.go:89] found id: ""
	I1218 01:50:44.055182 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.055199 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:44.055206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:44.055264 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:44.080010 1550381 cri.go:89] found id: ""
	I1218 01:50:44.080081 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.080118 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:44.080142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:44.080234 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:44.106566 1550381 cri.go:89] found id: ""
	I1218 01:50:44.106590 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.106599 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:44.106606 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:44.106685 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:44.130836 1550381 cri.go:89] found id: ""
	I1218 01:50:44.130864 1550381 logs.go:282] 0 containers: []
	W1218 01:50:44.130873 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:44.130883 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:44.130894 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:44.185795 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:44.185833 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:44.200138 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:44.200164 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:44.265688 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:44.257127    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.257682    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.259162    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.259663    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:44.261095    6861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:44.265760 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:44.265786 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:44.290625 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:44.290662 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
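Each gathering pass tails the last 400 lines of the kubelet and containerd journald units plus the warning-and-above dmesg buffer. A hedged wrapper around the same commands (the dmesg flags are simplified relative to the log's `-PH -L=never` invocation, and `gatherLogs` is an illustrative name):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs shells out to the same journalctl/dmesg commands the
// cycle above runs; the unit names and the -n 400 window come from
// the log, while the function itself is just a wrapper for this sketch.
func gatherLogs() map[string][]byte {
	cmds := map[string][]string{
		"kubelet":    {"journalctl", "-u", "kubelet", "-n", "400"},
		"containerd": {"journalctl", "-u", "containerd", "-n", "400"},
		"dmesg":      {"sh", "-c", "dmesg --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	out := make(map[string][]byte)
	for name, argv := range cmds {
		b, err := exec.Command("sudo", argv...).CombinedOutput()
		if err != nil {
			b = append(b, []byte("\n(error: "+err.Error()+")")...)
		}
		out[name] = b
	}
	return out
}

func main() {
	for name, b := range gatherLogs() {
		fmt.Printf("=== %s (%d bytes)\n", name, len(b))
	}
}
```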
	I1218 01:50:46.817986 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:46.829340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:46.829433 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:46.854080 1550381 cri.go:89] found id: ""
	I1218 01:50:46.854105 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.854113 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:46.854121 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:46.854178 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:46.894044 1550381 cri.go:89] found id: ""
	I1218 01:50:46.894069 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.894078 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:46.894084 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:46.894144 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:46.979469 1550381 cri.go:89] found id: ""
	I1218 01:50:46.979536 1550381 logs.go:282] 0 containers: []
	W1218 01:50:46.979561 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:46.979580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:46.979670 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:47.007329 1550381 cri.go:89] found id: ""
	I1218 01:50:47.007393 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.007416 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:47.007435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:47.007524 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:47.036488 1550381 cri.go:89] found id: ""
	I1218 01:50:47.036515 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.036530 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:47.036537 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:47.036600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:47.061288 1550381 cri.go:89] found id: ""
	I1218 01:50:47.061318 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.061327 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:47.061334 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:47.061394 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:47.086889 1550381 cri.go:89] found id: ""
	I1218 01:50:47.086916 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.086925 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:47.086932 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:47.086995 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:47.111795 1550381 cri.go:89] found id: ""
	I1218 01:50:47.111826 1550381 logs.go:282] 0 containers: []
	W1218 01:50:47.111835 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:47.111844 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:47.111855 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:47.166527 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:47.166560 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:47.184211 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:47.184238 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:47.251953 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:47.243102    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.243996    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.245625    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.246165    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:47.247773    6979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:47.251974 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:47.251986 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:47.277100 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:47.277134 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:49.805362 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:49.816269 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:49.816341 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:49.843797 1550381 cri.go:89] found id: ""
	I1218 01:50:49.843820 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.843828 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:49.843834 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:49.843894 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:49.869725 1550381 cri.go:89] found id: ""
	I1218 01:50:49.869751 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.869760 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:49.869766 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:49.869826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:49.913079 1550381 cri.go:89] found id: ""
	I1218 01:50:49.913102 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.913110 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:49.913117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:49.913175 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:49.978366 1550381 cri.go:89] found id: ""
	I1218 01:50:49.978456 1550381 logs.go:282] 0 containers: []
	W1218 01:50:49.978481 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:49.978506 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:49.978669 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:50.015889 1550381 cri.go:89] found id: ""
	I1218 01:50:50.015961 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.015995 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:50.016015 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:50.016118 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:50.043973 1550381 cri.go:89] found id: ""
	I1218 01:50:50.044008 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.044020 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:50.044028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:50.044097 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:50.071368 1550381 cri.go:89] found id: ""
	I1218 01:50:50.071397 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.071407 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:50.071415 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:50.071492 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:50.100352 1550381 cri.go:89] found id: ""
	I1218 01:50:50.100381 1550381 logs.go:282] 0 containers: []
	W1218 01:50:50.100392 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:50.100402 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:50.100414 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:50.157120 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:50.157156 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:50.171935 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:50.171962 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:50.243754 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:50.233187    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.233848    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.235761    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.238144    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:50.239335    7093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:50.243779 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:50.243792 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:50.271841 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:50.271895 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:52.801073 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:52.811866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:52.811938 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:52.841370 1550381 cri.go:89] found id: ""
	I1218 01:50:52.841396 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.841404 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:52.841411 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:52.841477 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:52.866527 1550381 cri.go:89] found id: ""
	I1218 01:50:52.866549 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.866557 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:52.866564 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:52.866629 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:52.905295 1550381 cri.go:89] found id: ""
	I1218 01:50:52.905323 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.905333 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:52.905340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:52.905402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:52.976848 1550381 cri.go:89] found id: ""
	I1218 01:50:52.976871 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.976880 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:52.976886 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:52.976945 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:53.005921 1550381 cri.go:89] found id: ""
	I1218 01:50:53.005996 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.006013 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:53.006021 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:53.006096 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:53.035172 1550381 cri.go:89] found id: ""
	I1218 01:50:53.035209 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.035219 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:53.035226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:53.035295 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:53.062748 1550381 cri.go:89] found id: ""
	I1218 01:50:53.062816 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.062841 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:53.062856 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:53.062933 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:53.088160 1550381 cri.go:89] found id: ""
	I1218 01:50:53.088194 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.088203 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:53.088215 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:53.088227 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:53.143868 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:53.143906 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:53.159169 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:53.159240 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:53.226415 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:53.217507    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.218119    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220118    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220684    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.222463    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:53.226438 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:53.226451 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:53.251410 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:53.251448 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
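kubectl keeps dialing https://localhost:8443 because that is presumably the `server:` endpoint recorded in /var/lib/minikube/kubeconfig, the file every `describe nodes` attempt passes via --kubeconfig. A sketch that scans a kubeconfig for its server entries with a plain text match (a real tool would parse the YAML properly; `serverEndpoints` is a made-up helper):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// serverEndpoints scans a kubeconfig for its `server:` entries, which
// is where kubectl gets the https://localhost:8443 endpoint seen in
// the errors above. A line-based scan is enough for this sketch.
func serverEndpoints(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "server:") {
			servers = append(servers, strings.TrimSpace(strings.TrimPrefix(line, "server:")))
		}
	}
	return servers, sc.Err()
}

func main() {
	servers, err := serverEndpoints("/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubectl will dial:", servers)
}
```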
	I1218 01:50:55.783464 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:55.793844 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:55.793915 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:55.822511 1550381 cri.go:89] found id: ""
	I1218 01:50:55.822543 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.822552 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:55.822559 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:55.822630 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:55.852049 1550381 cri.go:89] found id: ""
	I1218 01:50:55.852076 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.852084 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:55.852090 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:55.852167 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:55.877944 1550381 cri.go:89] found id: ""
	I1218 01:50:55.877974 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.877982 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:55.877989 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:55.878045 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:55.964104 1550381 cri.go:89] found id: ""
	I1218 01:50:55.964127 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.964136 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:55.964142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:55.964198 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:55.989628 1550381 cri.go:89] found id: ""
	I1218 01:50:55.989658 1550381 logs.go:282] 0 containers: []
	W1218 01:50:55.989667 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:55.989681 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:55.989752 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:56.024436 1550381 cri.go:89] found id: ""
	I1218 01:50:56.024465 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.024474 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:56.024480 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:56.024544 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:56.049953 1550381 cri.go:89] found id: ""
	I1218 01:50:56.050028 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.050045 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:56.050053 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:56.050118 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:56.075666 1550381 cri.go:89] found id: ""
	I1218 01:50:56.075711 1550381 logs.go:282] 0 containers: []
	W1218 01:50:56.075720 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:56.075729 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:56.075747 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:56.141793 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:56.132794    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.133650    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.135300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.135878    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:56.137492    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:56.141818 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:56.141830 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:56.166981 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:56.167013 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:56.193749 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:56.193777 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:56.248762 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:56.248796 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:58.763667 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:58.773893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:58.773964 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:58.801142 1550381 cri.go:89] found id: ""
	I1218 01:50:58.801168 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.801177 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:58.801184 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:58.801255 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:58.826909 1550381 cri.go:89] found id: ""
	I1218 01:50:58.826937 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.826946 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:58.826952 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:58.827011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:58.852298 1550381 cri.go:89] found id: ""
	I1218 01:50:58.852328 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.852337 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:58.852343 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:58.852402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:58.877078 1550381 cri.go:89] found id: ""
	I1218 01:50:58.877103 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.877112 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:58.877118 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:58.877179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:58.908546 1550381 cri.go:89] found id: ""
	I1218 01:50:58.908572 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.908582 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:58.908588 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:58.908665 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:58.963294 1550381 cri.go:89] found id: ""
	I1218 01:50:58.963327 1550381 logs.go:282] 0 containers: []
	W1218 01:50:58.963336 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:58.963342 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:58.963408 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:59.004870 1550381 cri.go:89] found id: ""
	I1218 01:50:59.004907 1550381 logs.go:282] 0 containers: []
	W1218 01:50:59.004917 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:59.004923 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:59.004995 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:59.030744 1550381 cri.go:89] found id: ""
	I1218 01:50:59.030812 1550381 logs.go:282] 0 containers: []
	W1218 01:50:59.030838 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:59.030854 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:59.030866 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:59.045546 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:59.045575 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:59.112855 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:59.104235    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.104777    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.106469    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.106981    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:59.108512    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:50:59.112876 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:59.112888 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:59.137778 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:59.137857 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:59.165599 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:59.165624 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:01.723994 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:01.734966 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:01.735033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:01.759065 1550381 cri.go:89] found id: ""
	I1218 01:51:01.759093 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.759102 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:01.759108 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:01.759169 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:01.787378 1550381 cri.go:89] found id: ""
	I1218 01:51:01.787406 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.787416 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:01.787421 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:01.787490 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:01.812815 1550381 cri.go:89] found id: ""
	I1218 01:51:01.812838 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.812847 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:01.812853 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:01.812912 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:01.838955 1550381 cri.go:89] found id: ""
	I1218 01:51:01.838981 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.838990 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:01.839003 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:01.839062 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:01.864230 1550381 cri.go:89] found id: ""
	I1218 01:51:01.864256 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.864266 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:01.864273 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:01.864335 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:01.890158 1550381 cri.go:89] found id: ""
	I1218 01:51:01.890184 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.890193 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:01.890199 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:01.890259 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:01.955214 1550381 cri.go:89] found id: ""
	I1218 01:51:01.955289 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.955313 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:01.955332 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:01.955421 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:01.997347 1550381 cri.go:89] found id: ""
	I1218 01:51:01.997414 1550381 logs.go:282] 0 containers: []
	W1218 01:51:01.997439 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:01.997457 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:01.997469 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:02.054965 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:02.055055 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:02.074503 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:02.074555 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:02.144467 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:02.135994    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.136861    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.138510    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.138865    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:02.140404    7543 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:02.144499 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:02.144513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:02.170450 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:02.170493 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:04.704549 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:04.715641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:04.715714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:04.742904 1550381 cri.go:89] found id: ""
	I1218 01:51:04.742928 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.742937 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:04.742943 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:04.743002 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:04.768296 1550381 cri.go:89] found id: ""
	I1218 01:51:04.768323 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.768332 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:04.768338 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:04.768400 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:04.794825 1550381 cri.go:89] found id: ""
	I1218 01:51:04.794859 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.794868 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:04.794888 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:04.794953 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:04.820347 1550381 cri.go:89] found id: ""
	I1218 01:51:04.820375 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.820383 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:04.820390 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:04.820452 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:04.845796 1550381 cri.go:89] found id: ""
	I1218 01:51:04.845823 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.845832 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:04.845839 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:04.845899 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:04.870392 1550381 cri.go:89] found id: ""
	I1218 01:51:04.870418 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.870426 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:04.870433 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:04.870495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:04.918945 1550381 cri.go:89] found id: ""
	I1218 01:51:04.918979 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.918988 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:04.918995 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:04.919055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:04.974228 1550381 cri.go:89] found id: ""
	I1218 01:51:04.974255 1550381 logs.go:282] 0 containers: []
	W1218 01:51:04.974264 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:04.974273 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:04.974286 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:05.042680 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:05.033763    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.034389    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.036284    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.036826    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:05.038546    7652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:05.042706 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:05.042719 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:05.068392 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:05.068427 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:05.097162 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:05.097199 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:05.155869 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:05.155910 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:07.671922 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:07.682619 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:07.682688 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:07.707484 1550381 cri.go:89] found id: ""
	I1218 01:51:07.707512 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.707521 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:07.707528 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:07.707585 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:07.736732 1550381 cri.go:89] found id: ""
	I1218 01:51:07.736765 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.736774 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:07.736781 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:07.736841 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:07.761774 1550381 cri.go:89] found id: ""
	I1218 01:51:07.761800 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.761809 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:07.761815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:07.761876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:07.790605 1550381 cri.go:89] found id: ""
	I1218 01:51:07.790635 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.790644 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:07.790650 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:07.790714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:07.816203 1550381 cri.go:89] found id: ""
	I1218 01:51:07.816230 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.816239 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:07.816245 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:07.816304 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:07.841127 1550381 cri.go:89] found id: ""
	I1218 01:51:07.841150 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.841159 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:07.841165 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:07.841225 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:07.865946 1550381 cri.go:89] found id: ""
	I1218 01:51:07.866010 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.866036 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:07.866053 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:07.866143 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:07.916531 1550381 cri.go:89] found id: ""
	I1218 01:51:07.916559 1550381 logs.go:282] 0 containers: []
	W1218 01:51:07.916568 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:07.916578 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:07.916589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:07.983404 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:07.983433 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:08.038790 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:08.038829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:08.055026 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:08.055100 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:08.121982 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:08.112879    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.113469    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.115072    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.115668    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:08.117746    7781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:08.122053 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:08.122079 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:10.648476 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:10.659206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:10.659275 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:10.684487 1550381 cri.go:89] found id: ""
	I1218 01:51:10.684516 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.684525 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:10.684532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:10.684594 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:10.709248 1550381 cri.go:89] found id: ""
	I1218 01:51:10.709278 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.709288 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:10.709294 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:10.709354 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:10.733670 1550381 cri.go:89] found id: ""
	I1218 01:51:10.733700 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.733709 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:10.733716 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:10.733776 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:10.762711 1550381 cri.go:89] found id: ""
	I1218 01:51:10.762734 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.762748 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:10.762755 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:10.762814 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:10.791896 1550381 cri.go:89] found id: ""
	I1218 01:51:10.791929 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.791938 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:10.791944 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:10.792012 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:10.816916 1550381 cri.go:89] found id: ""
	I1218 01:51:10.816940 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.816951 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:10.816957 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:10.817018 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:10.848467 1550381 cri.go:89] found id: ""
	I1218 01:51:10.848533 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.848555 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:10.848575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:10.848684 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:10.872632 1550381 cri.go:89] found id: ""
	I1218 01:51:10.872694 1550381 logs.go:282] 0 containers: []
	W1218 01:51:10.872710 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:10.872719 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:10.872731 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:10.932049 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:10.932119 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:11.006112 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:11.006150 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:11.021573 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:11.021602 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:11.086764 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:11.077377    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.078427    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.080067    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.080416    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:11.082029    7897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:11.086785 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:11.086798 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:13.613916 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:13.625018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:13.625093 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:13.651186 1550381 cri.go:89] found id: ""
	I1218 01:51:13.651211 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.651220 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:13.651226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:13.651289 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:13.680145 1550381 cri.go:89] found id: ""
	I1218 01:51:13.680172 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.680181 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:13.680187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:13.680246 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:13.706941 1550381 cri.go:89] found id: ""
	I1218 01:51:13.706970 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.706980 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:13.706986 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:13.707046 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:13.735536 1550381 cri.go:89] found id: ""
	I1218 01:51:13.735562 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.735571 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:13.735578 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:13.735637 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:13.763111 1550381 cri.go:89] found id: ""
	I1218 01:51:13.763185 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.763209 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:13.763227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:13.763313 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:13.788754 1550381 cri.go:89] found id: ""
	I1218 01:51:13.788779 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.788787 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:13.788794 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:13.788883 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:13.813966 1550381 cri.go:89] found id: ""
	I1218 01:51:13.813989 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.814004 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:13.814010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:13.814068 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:13.838881 1550381 cri.go:89] found id: ""
	I1218 01:51:13.838907 1550381 logs.go:282] 0 containers: []
	W1218 01:51:13.838915 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:13.838925 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:13.838936 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:13.869225 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:13.869250 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:13.928878 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:13.928917 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:13.955609 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:13.955639 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:14.045680 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:14.037393    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.038154    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.039915    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.040305    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:14.041849    8013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:14.045710 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:14.045723 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:16.572096 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:16.582596 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:16.582666 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:16.606933 1550381 cri.go:89] found id: ""
	I1218 01:51:16.606963 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.606972 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:16.606979 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:16.607038 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:16.631960 1550381 cri.go:89] found id: ""
	I1218 01:51:16.631989 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.632004 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:16.632010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:16.632071 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:16.659171 1550381 cri.go:89] found id: ""
	I1218 01:51:16.659198 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.659207 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:16.659213 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:16.659269 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:16.689389 1550381 cri.go:89] found id: ""
	I1218 01:51:16.689414 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.689422 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:16.689429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:16.689494 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:16.714209 1550381 cri.go:89] found id: ""
	I1218 01:51:16.714236 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.714246 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:16.714252 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:16.714311 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:16.739422 1550381 cri.go:89] found id: ""
	I1218 01:51:16.739450 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.739461 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:16.739467 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:16.739529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:16.765164 1550381 cri.go:89] found id: ""
	I1218 01:51:16.765231 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.765256 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:16.765283 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:16.765372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:16.790914 1550381 cri.go:89] found id: ""
	I1218 01:51:16.790990 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.791014 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:16.791035 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:16.791063 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:16.848408 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:16.848446 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:16.864121 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:16.864199 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:16.967366 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:16.949726    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.950835    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.952284    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.953468    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.954281    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:16.967436 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:16.967463 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:17.008108 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:17.008145 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
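
	The cycle above is minikube's crash-diagnosis pass: having found no control-plane containers, it collects kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal sketch of that pass as one script, using the exact commands from the Run: lines above (assumption: executed on the minikube node over SSH, as ssh_runner.go does; only the grouping is illustrative):

	    # kubelet unit log (last 400 lines)
	    sudo journalctl -u kubelet -n 400
	    # kernel warnings and errors
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # node state via the bundled kubectl; exits 1 while the apiserver is down
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    # containerd unit log
	    sudo journalctl -u containerd -n 400
	    # all containers, falling back to docker if crictl is absent
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
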
	I1218 01:51:19.540127 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:19.550917 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:19.550989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:19.574864 1550381 cri.go:89] found id: ""
	I1218 01:51:19.574939 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.574964 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:19.574978 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:19.575059 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:19.605362 1550381 cri.go:89] found id: ""
	I1218 01:51:19.605386 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.605395 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:19.605401 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:19.605465 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:19.631747 1550381 cri.go:89] found id: ""
	I1218 01:51:19.631774 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.631789 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:19.631795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:19.631870 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:19.656716 1550381 cri.go:89] found id: ""
	I1218 01:51:19.656740 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.656749 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:19.656755 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:19.656813 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:19.689179 1550381 cri.go:89] found id: ""
	I1218 01:51:19.689206 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.689215 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:19.689221 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:19.689292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:19.713751 1550381 cri.go:89] found id: ""
	I1218 01:51:19.713774 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.713783 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:19.713789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:19.713846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:19.737993 1550381 cri.go:89] found id: ""
	I1218 01:51:19.738063 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.738074 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:19.738081 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:19.738150 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:19.763540 1550381 cri.go:89] found id: ""
	I1218 01:51:19.763565 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.763574 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:19.763583 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:19.763618 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:19.818946 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:19.818982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:19.834461 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:19.834487 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:19.932671 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:19.901289    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902181    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902894    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.904924    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.905669    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:19.901289    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902181    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902894    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.904924    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.905669    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
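
	Every describe-nodes attempt in this window fails the same way: the bundled kubectl cannot reach the apiserver on localhost:8443 and exits 1 with "connection refused". A minimal sketch, assuming shell access to the node, that separates "nothing listening on 8443" from a kubeconfig problem; the curl probe is an illustration, not part of the test:

	    # exact failing command from the log
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    # hypothetical follow-up: probe the port directly; /readyz responds once
	    # a listener exists, so no response at all means the apiserver never bound
	    curl -ksf https://localhost:8443/readyz || echo 'nothing listening on 8443'
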
	I1218 01:51:19.932695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:19.932708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:19.986050 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:19.986085 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:22.530737 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:22.542075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:22.542151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:22.567921 1550381 cri.go:89] found id: ""
	I1218 01:51:22.567945 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.567953 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:22.567960 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:22.568020 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:22.595894 1550381 cri.go:89] found id: ""
	I1218 01:51:22.595919 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.595928 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:22.595933 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:22.595991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:22.620929 1550381 cri.go:89] found id: ""
	I1218 01:51:22.620953 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.620968 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:22.620974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:22.621040 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:22.646170 1550381 cri.go:89] found id: ""
	I1218 01:51:22.646195 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.646203 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:22.646210 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:22.646270 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:22.675272 1550381 cri.go:89] found id: ""
	I1218 01:51:22.675296 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.675305 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:22.675312 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:22.675376 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:22.702994 1550381 cri.go:89] found id: ""
	I1218 01:51:22.703023 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.703033 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:22.703039 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:22.703106 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:22.728507 1550381 cri.go:89] found id: ""
	I1218 01:51:22.728533 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.728542 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:22.728548 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:22.728608 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:22.754134 1550381 cri.go:89] found id: ""
	I1218 01:51:22.754157 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.754165 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:22.754175 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:22.754187 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:22.810488 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:22.810539 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:22.826174 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:22.826212 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:22.906393 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:22.881838    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.882696    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884418    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884947    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.886679    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:22.881838    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.882696    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884418    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884947    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.886679    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:22.906431 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:22.906448 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:22.948969 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:22.949025 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:25.504885 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:25.515607 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:25.515676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:25.539969 1550381 cri.go:89] found id: ""
	I1218 01:51:25.539994 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.540003 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:25.540010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:25.540076 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:25.565160 1550381 cri.go:89] found id: ""
	I1218 01:51:25.565189 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.565198 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:25.565204 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:25.565262 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:25.593521 1550381 cri.go:89] found id: ""
	I1218 01:51:25.593545 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.593554 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:25.593560 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:25.593625 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:25.618492 1550381 cri.go:89] found id: ""
	I1218 01:51:25.618523 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.618532 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:25.618538 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:25.618600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:25.642784 1550381 cri.go:89] found id: ""
	I1218 01:51:25.642810 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.642819 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:25.642825 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:25.642885 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:25.667732 1550381 cri.go:89] found id: ""
	I1218 01:51:25.667759 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.667768 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:25.667778 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:25.667843 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:25.695444 1550381 cri.go:89] found id: ""
	I1218 01:51:25.695468 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.695477 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:25.695483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:25.695540 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:25.720467 1550381 cri.go:89] found id: ""
	I1218 01:51:25.720492 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.720501 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:25.720510 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:25.720522 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:25.777380 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:25.777416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:25.793106 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:25.793135 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:25.859796 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:25.850842    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.851589    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853179    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853705    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.855254    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:25.850842    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.851589    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853179    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853705    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.855254    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:25.859817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:25.859829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:25.885375 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:25.885414 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:28.480490 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:28.491517 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:28.491587 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:28.528988 1550381 cri.go:89] found id: ""
	I1218 01:51:28.529011 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.529020 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:28.529027 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:28.529088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:28.554389 1550381 cri.go:89] found id: ""
	I1218 01:51:28.554415 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.554423 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:28.554429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:28.554491 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:28.595339 1550381 cri.go:89] found id: ""
	I1218 01:51:28.595365 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.595374 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:28.595380 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:28.595440 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:28.620349 1550381 cri.go:89] found id: ""
	I1218 01:51:28.620376 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.620384 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:28.620391 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:28.620451 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:28.644815 1550381 cri.go:89] found id: ""
	I1218 01:51:28.644844 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.644854 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:28.644862 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:28.644923 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:28.669719 1550381 cri.go:89] found id: ""
	I1218 01:51:28.669746 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.669755 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:28.669762 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:28.669822 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:28.694390 1550381 cri.go:89] found id: ""
	I1218 01:51:28.694415 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.694424 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:28.694430 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:28.694491 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:28.719213 1550381 cri.go:89] found id: ""
	I1218 01:51:28.719238 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.719247 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:28.719257 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:28.719268 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:28.777972 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:28.778010 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:28.792667 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:28.792698 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:28.863732 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:28.855982    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.856419    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.857906    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.858373    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.859886    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:28.855982    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.856419    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.857906    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.858373    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.859886    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:28.863755 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:28.863768 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:28.896538 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:28.896571 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:31.484234 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:31.494710 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:31.494781 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:31.519036 1550381 cri.go:89] found id: ""
	I1218 01:51:31.519061 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.519070 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:31.519077 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:31.519136 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:31.543677 1550381 cri.go:89] found id: ""
	I1218 01:51:31.543702 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.543710 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:31.543717 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:31.543778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:31.570267 1550381 cri.go:89] found id: ""
	I1218 01:51:31.570299 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.570308 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:31.570315 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:31.570406 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:31.597988 1550381 cri.go:89] found id: ""
	I1218 01:51:31.598024 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.598034 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:31.598040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:31.598102 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:31.625949 1550381 cri.go:89] found id: ""
	I1218 01:51:31.625983 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.625993 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:31.626014 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:31.626097 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:31.654833 1550381 cri.go:89] found id: ""
	I1218 01:51:31.654898 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.654923 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:31.654937 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:31.655011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:31.686105 1550381 cri.go:89] found id: ""
	I1218 01:51:31.686132 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.686143 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:31.686149 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:31.686233 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:31.711106 1550381 cri.go:89] found id: ""
	I1218 01:51:31.711139 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.711148 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:31.711158 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:31.711187 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:31.725923 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:31.725952 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:31.789766 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:31.780492    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.781242    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.782882    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.783497    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.785156    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:31.780492    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.781242    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.782882    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.783497    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.785156    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:31.789789 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:31.789801 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:31.815524 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:31.815558 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:31.843690 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:31.843718 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:34.403611 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:34.414490 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:34.414564 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:34.438520 1550381 cri.go:89] found id: ""
	I1218 01:51:34.438544 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.438552 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:34.438562 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:34.438625 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:34.462603 1550381 cri.go:89] found id: ""
	I1218 01:51:34.462627 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.462636 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:34.462642 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:34.462699 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:34.490371 1550381 cri.go:89] found id: ""
	I1218 01:51:34.490395 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.490404 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:34.490410 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:34.490471 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:34.513456 1550381 cri.go:89] found id: ""
	I1218 01:51:34.513480 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.513488 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:34.513495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:34.513562 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:34.537361 1550381 cri.go:89] found id: ""
	I1218 01:51:34.537385 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.537394 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:34.537407 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:34.537468 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:34.561230 1550381 cri.go:89] found id: ""
	I1218 01:51:34.561253 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.561261 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:34.561268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:34.561348 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:34.585180 1550381 cri.go:89] found id: ""
	I1218 01:51:34.585204 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.585212 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:34.585219 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:34.585280 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:34.609741 1550381 cri.go:89] found id: ""
	I1218 01:51:34.609766 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.609775 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:34.609785 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:34.609802 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:34.667204 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:34.667238 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:34.682240 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:34.682269 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:34.745795 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:34.737582    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.737983    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739504    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739826    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.741316    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:34.737582    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.737983    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739504    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739826    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.741316    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:34.745817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:34.745831 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:34.771222 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:34.771256 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:37.302139 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:37.313213 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:37.313316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:37.348873 1550381 cri.go:89] found id: ""
	I1218 01:51:37.348895 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.348903 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:37.348909 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:37.348966 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:37.374229 1550381 cri.go:89] found id: ""
	I1218 01:51:37.374256 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.374265 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:37.374271 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:37.374332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:37.398897 1550381 cri.go:89] found id: ""
	I1218 01:51:37.398920 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.398928 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:37.398935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:37.398991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:37.422904 1550381 cri.go:89] found id: ""
	I1218 01:51:37.422930 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.422939 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:37.422946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:37.423010 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:37.451168 1550381 cri.go:89] found id: ""
	I1218 01:51:37.451196 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.451205 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:37.451211 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:37.451273 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:37.477986 1550381 cri.go:89] found id: ""
	I1218 01:51:37.478011 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.478021 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:37.478028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:37.478096 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:37.504463 1550381 cri.go:89] found id: ""
	I1218 01:51:37.504487 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.504497 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:37.504503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:37.504563 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:37.529381 1550381 cri.go:89] found id: ""
	I1218 01:51:37.529405 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.529414 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:37.529423 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:37.529435 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:37.598285 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:37.584791    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.590506    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.591762    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.592298    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.593854    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:37.584791    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.590506    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.591762    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.592298    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.593854    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:37.598307 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:37.598319 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:37.623017 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:37.623052 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:37.654645 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:37.654674 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:37.711304 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:37.711339 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
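
	The pgrep timestamps (01:51:16, :19, :22, ... :40) show the wait loop retrying roughly every three seconds. A hypothetical outline of that loop in shell, with the process pattern, component names, and crictl invocation copied from the log (the loop structure itself is an illustration, not minikube source):

	    # poll until a kube-apiserver process for this profile exists
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      # fall back to listing CRI containers for each control-plane component
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$name"
	      done
	      sleep 3   # matches the ~3s cadence between attempts above
	    done
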
	I1218 01:51:40.226741 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:40.238408 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:40.238480 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:40.263769 1550381 cri.go:89] found id: ""
	I1218 01:51:40.263795 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.263804 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:40.263810 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:40.263896 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:40.289194 1550381 cri.go:89] found id: ""
	I1218 01:51:40.289220 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.289228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:40.289234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:40.289292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:40.314040 1550381 cri.go:89] found id: ""
	I1218 01:51:40.314064 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.314073 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:40.314079 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:40.314137 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:40.339145 1550381 cri.go:89] found id: ""
	I1218 01:51:40.339180 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.339189 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:40.339212 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:40.339293 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:40.364902 1550381 cri.go:89] found id: ""
	I1218 01:51:40.364931 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.364940 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:40.364947 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:40.365009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:40.389709 1550381 cri.go:89] found id: ""
	I1218 01:51:40.389730 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.389739 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:40.389745 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:40.389804 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:40.414858 1550381 cri.go:89] found id: ""
	I1218 01:51:40.414882 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.414891 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:40.414898 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:40.414958 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:40.441847 1550381 cri.go:89] found id: ""
	I1218 01:51:40.441875 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.441884 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:40.441893 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:40.441906 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:40.456791 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:40.456821 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:40.525853 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:40.518222    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.518768    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.520336    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.520859    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.521950    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:40.525876 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:40.525889 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:40.550993 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:40.551028 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:40.581756 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:40.581786 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
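The cycle above repeats about every three seconds until the start wait gives up: one pgrep for a running kube-apiserver, then one crictl listing per expected control-plane container, all of which come back empty. A minimal stand-alone sketch of that per-component probe, assuming a shell inside the minikube node (e.g. via minikube ssh) and reusing the exact crictl invocation and container names the log shows:

    # Sketch only: re-run the probe loop from the log by hand.
    # Assumes crictl on the node talks to the same containerd instance.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

An all-empty result, as here, means containerd never started (or never kept) any control-plane container, which is why the kubectl calls that follow can only fail.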
	I1218 01:51:43.139640 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:43.166426 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:43.166501 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:43.205967 1550381 cri.go:89] found id: ""
	I1218 01:51:43.206046 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.206071 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:43.206091 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:43.206223 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:43.234922 1550381 cri.go:89] found id: ""
	I1218 01:51:43.234950 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.234958 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:43.234964 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:43.235023 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:43.261353 1550381 cri.go:89] found id: ""
	I1218 01:51:43.261376 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.261385 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:43.261392 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:43.261482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:43.286879 1550381 cri.go:89] found id: ""
	I1218 01:51:43.286906 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.286915 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:43.286922 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:43.286982 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:43.312530 1550381 cri.go:89] found id: ""
	I1218 01:51:43.312554 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.312568 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:43.312575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:43.312667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:43.337185 1550381 cri.go:89] found id: ""
	I1218 01:51:43.337207 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.337217 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:43.337223 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:43.337280 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:43.361707 1550381 cri.go:89] found id: ""
	I1218 01:51:43.361731 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.361741 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:43.361747 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:43.361805 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:43.391450 1550381 cri.go:89] found id: ""
	I1218 01:51:43.391483 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.391492 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:43.391502 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:43.391513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:43.449067 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:43.449104 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:43.464299 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:43.464329 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:43.534945 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:43.525741    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.526498    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.528182    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.528863    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.530697    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:43.534968 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:43.534980 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:43.560324 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:43.560357 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:46.089618 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:46.100369 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:46.100466 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:46.125679 1550381 cri.go:89] found id: ""
	I1218 01:51:46.125705 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.125714 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:46.125722 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:46.125789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:46.187262 1550381 cri.go:89] found id: ""
	I1218 01:51:46.187300 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.187310 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:46.187317 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:46.187376 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:46.244106 1550381 cri.go:89] found id: ""
	I1218 01:51:46.244130 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.244139 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:46.244145 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:46.244212 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:46.269674 1550381 cri.go:89] found id: ""
	I1218 01:51:46.269740 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.269769 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:46.269787 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:46.269876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:46.299177 1550381 cri.go:89] found id: ""
	I1218 01:51:46.299199 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.299209 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:46.299215 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:46.299273 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:46.328469 1550381 cri.go:89] found id: ""
	I1218 01:51:46.328491 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.328499 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:46.328506 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:46.328564 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:46.354262 1550381 cri.go:89] found id: ""
	I1218 01:51:46.354288 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.354297 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:46.354304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:46.354362 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:46.378724 1550381 cri.go:89] found id: ""
	I1218 01:51:46.378752 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.378761 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:46.378770 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:46.378781 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:46.433721 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:46.433759 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:46.448259 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:46.448295 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:46.511060 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:46.503056    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.503703    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.504880    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.505441    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.507108    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:46.511081 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:46.511093 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:46.536601 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:46.536803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:49.070137 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:49.081049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:49.081123 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:49.106438 1550381 cri.go:89] found id: ""
	I1218 01:51:49.106465 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.106474 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:49.106483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:49.106546 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:49.131233 1550381 cri.go:89] found id: ""
	I1218 01:51:49.131257 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.131265 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:49.131272 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:49.131337 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:49.194204 1550381 cri.go:89] found id: ""
	I1218 01:51:49.194233 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.194242 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:49.194248 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:49.194310 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:49.244013 1550381 cri.go:89] found id: ""
	I1218 01:51:49.244039 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.244048 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:49.244054 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:49.244120 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:49.271185 1550381 cri.go:89] found id: ""
	I1218 01:51:49.271211 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.271219 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:49.271226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:49.271288 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:49.298143 1550381 cri.go:89] found id: ""
	I1218 01:51:49.298170 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.298180 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:49.298187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:49.298251 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:49.324346 1550381 cri.go:89] found id: ""
	I1218 01:51:49.324374 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.324383 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:49.324389 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:49.324450 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:49.350033 1550381 cri.go:89] found id: ""
	I1218 01:51:49.350063 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.350072 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:49.350081 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:49.350094 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:49.382558 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:49.382589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:49.438756 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:49.438795 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:49.453736 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:49.453765 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:49.515649 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:49.506698    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.507341    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.508268    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.509832    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.510129    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:49.515672 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:49.515684 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:52.041321 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:52.052329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:52.052403 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:52.082403 1550381 cri.go:89] found id: ""
	I1218 01:51:52.082434 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.082444 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:52.082451 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:52.082513 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:52.108691 1550381 cri.go:89] found id: ""
	I1218 01:51:52.108720 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.108729 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:52.108735 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:52.108795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:52.138279 1550381 cri.go:89] found id: ""
	I1218 01:51:52.138314 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.138323 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:52.138329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:52.138393 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:52.207039 1550381 cri.go:89] found id: ""
	I1218 01:51:52.207067 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.207076 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:52.207083 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:52.207150 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:52.236007 1550381 cri.go:89] found id: ""
	I1218 01:51:52.236042 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.236052 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:52.236059 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:52.236125 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:52.267547 1550381 cri.go:89] found id: ""
	I1218 01:51:52.267583 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.267593 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:52.267599 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:52.267668 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:52.295275 1550381 cri.go:89] found id: ""
	I1218 01:51:52.295310 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.295320 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:52.295326 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:52.295407 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:52.324187 1550381 cri.go:89] found id: ""
	I1218 01:51:52.324215 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.324224 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:52.324234 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:52.324246 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:52.352151 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:52.352182 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:52.408412 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:52.408446 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:52.423024 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:52.423098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:52.488577 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:52.479672    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.480321    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.481877    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.482453    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.484212    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:52.488599 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:52.488613 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:55.015396 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:55.026777 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:55.026851 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:55.052687 1550381 cri.go:89] found id: ""
	I1218 01:51:55.052713 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.052722 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:55.052728 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:55.052786 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:55.082492 1550381 cri.go:89] found id: ""
	I1218 01:51:55.082515 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.082524 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:55.082531 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:55.082592 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:55.107565 1550381 cri.go:89] found id: ""
	I1218 01:51:55.107592 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.107600 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:55.107607 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:55.107674 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:55.135213 1550381 cri.go:89] found id: ""
	I1218 01:51:55.135241 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.135249 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:55.135270 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:55.135332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:55.177099 1550381 cri.go:89] found id: ""
	I1218 01:51:55.177128 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.177137 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:55.177143 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:55.177210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:55.224917 1550381 cri.go:89] found id: ""
	I1218 01:51:55.224946 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.224954 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:55.224961 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:55.225020 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:55.252438 1550381 cri.go:89] found id: ""
	I1218 01:51:55.252465 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.252473 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:55.252479 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:55.252538 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:55.277054 1550381 cri.go:89] found id: ""
	I1218 01:51:55.277074 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.277082 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:55.277091 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:55.277106 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:55.292214 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:55.292240 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:55.354379 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:55.346236    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.346747    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.348217    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.348649    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.350094    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:55.354401 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:55.354412 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:55.379112 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:55.379143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:55.407257 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:55.407284 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:57.964281 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:57.975020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:57.975088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:58.005630 1550381 cri.go:89] found id: ""
	I1218 01:51:58.005658 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.005667 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:58.005674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:58.005745 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:58.032296 1550381 cri.go:89] found id: ""
	I1218 01:51:58.032319 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.032329 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:58.032335 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:58.032402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:58.061454 1550381 cri.go:89] found id: ""
	I1218 01:51:58.061479 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.061488 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:58.061495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:58.061554 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:58.087783 1550381 cri.go:89] found id: ""
	I1218 01:51:58.087808 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.087817 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:58.087824 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:58.087884 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:58.115473 1550381 cri.go:89] found id: ""
	I1218 01:51:58.115496 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.115505 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:58.115512 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:58.115599 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:58.152731 1550381 cri.go:89] found id: ""
	I1218 01:51:58.152757 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.152766 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:58.152773 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:58.152832 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:58.207262 1550381 cri.go:89] found id: ""
	I1218 01:51:58.207284 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.207302 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:58.207310 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:58.207367 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:58.244074 1550381 cri.go:89] found id: ""
	I1218 01:51:58.244103 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.244112 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:58.244121 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:58.244133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:58.305417 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:58.305455 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:58.320298 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:58.320326 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:58.392177 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:58.383564    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.384410    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.386085    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.386657    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.388186    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:58.392200 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:58.392215 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:58.418264 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:58.418299 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:00.947037 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:00.958414 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:00.958504 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:00.982432 1550381 cri.go:89] found id: ""
	I1218 01:52:00.982456 1550381 logs.go:282] 0 containers: []
	W1218 01:52:00.982465 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:00.982472 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:00.982554 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:01.011620 1550381 cri.go:89] found id: ""
	I1218 01:52:01.011645 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.011654 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:01.011661 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:01.011721 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:01.038538 1550381 cri.go:89] found id: ""
	I1218 01:52:01.038564 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.038572 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:01.038578 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:01.038636 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:01.062732 1550381 cri.go:89] found id: ""
	I1218 01:52:01.062758 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.062768 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:01.062775 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:01.062836 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:01.088130 1550381 cri.go:89] found id: ""
	I1218 01:52:01.088156 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.088165 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:01.088172 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:01.088241 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:01.116412 1550381 cri.go:89] found id: ""
	I1218 01:52:01.116440 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.116450 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:01.116471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:01.116532 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:01.157710 1550381 cri.go:89] found id: ""
	I1218 01:52:01.157737 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.157747 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:01.157754 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:01.157815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:01.207757 1550381 cri.go:89] found id: ""
	I1218 01:52:01.207784 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.207794 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:01.207803 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:01.207815 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:01.293467 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:01.293515 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:01.308790 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:01.308825 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:01.377467 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:01.369257    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370071    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370750    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.372362    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.373023    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:01.369257    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370071    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370750    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.372362    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.373023    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
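	The block above is the first of many identical failures in this stretch of the run: kubectl reads the server address out of /var/lib/minikube/kubeconfig, dials localhost:8443, and is refused at the TCP layer five times before giving up, because nothing is listening on that port. A minimal standalone probe of the same condition, written as a sketch in Go (the `probe` helper and the fixed address are illustrative, not taken from minikube's source):

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // probe dials the apiserver endpoint that kubectl is failing against
	    // in the log above. A "connect: connection refused" error means no
	    // process is listening on the port, i.e. the apiserver never started.
	    func probe(addr string) error {
	        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	        if err != nil {
	            return err // e.g. dial tcp [::1]:8443: connect: connection refused
	        }
	        return conn.Close()
	    }

	    func main() {
	        if err := probe("localhost:8443"); err != nil {
	            fmt.Println("apiserver unreachable:", err)
	            return
	        }
	        fmt.Println("apiserver port is accepting connections")
	    }

	Run inside the minikube node, this distinguishes "apiserver down" (refused, as here) from "apiserver slow" (a dial timeout).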
	I1218 01:52:01.377487 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:01.377501 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:01.403688 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:01.403722 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:03.936540 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:03.947485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:03.947559 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:03.972917 1550381 cri.go:89] found id: ""
	I1218 01:52:03.972939 1550381 logs.go:282] 0 containers: []
	W1218 01:52:03.972947 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:03.972953 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:03.973018 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:03.997960 1550381 cri.go:89] found id: ""
	I1218 01:52:03.997983 1550381 logs.go:282] 0 containers: []
	W1218 01:52:03.997992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:03.997998 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:03.998056 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:04.027683 1550381 cri.go:89] found id: ""
	I1218 01:52:04.027754 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.027780 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:04.027808 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:04.027916 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:04.054769 1550381 cri.go:89] found id: ""
	I1218 01:52:04.054833 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.054843 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:04.054849 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:04.054917 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:04.081260 1550381 cri.go:89] found id: ""
	I1218 01:52:04.081284 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.081293 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:04.081299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:04.081372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:04.106563 1550381 cri.go:89] found id: ""
	I1218 01:52:04.106590 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.106599 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:04.106606 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:04.106667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:04.131682 1550381 cri.go:89] found id: ""
	I1218 01:52:04.131708 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.131717 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:04.131724 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:04.131790 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:04.170215 1550381 cri.go:89] found id: ""
	I1218 01:52:04.170242 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.170251 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:04.170260 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:04.170273 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:04.211169 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:04.211207 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:04.263603 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:04.263636 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:04.319257 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:04.319294 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:04.334300 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:04.334329 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:04.399992 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:04.392155    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.392854    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394600    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394909    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.396367    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:04.392155    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.392854    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394600    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394909    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.396367    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
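	Each cri.go:54 / found id: "" pair above is one component check: minikube runs `sudo crictl ps -a --quiet --name=<component>` over SSH, and an empty result means containerd has no container, running or exited, for that name. A sketch of the same enumeration, meant to run on the node itself and assuming crictl is available there (`listContainerIDs` is a hypothetical helper, not minikube's implementation):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainerIDs mirrors the check in the log: with --quiet, crictl
	    // prints one container ID per line, and prints nothing at all when no
	    // container matches the name filter.
	    func listContainerIDs(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        var ids []string
	        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	            if line != "" {
	                ids = append(ids, line)
	            }
	        }
	        return ids, nil
	    }

	    func main() {
	        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	        for _, c := range components {
	            ids, err := listContainerIDs(c)
	            if err != nil {
	                fmt.Printf("%s: listing failed: %v\n", c, err)
	                continue
	            }
	            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	        }
	    }

	In this run all eight names come back empty on every cycle, which is why the log falls through to journal, dmesg, and container-status collection each time.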
	I1218 01:52:06.900248 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:06.910997 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:06.911067 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:06.935514 1550381 cri.go:89] found id: ""
	I1218 01:52:06.935539 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.935548 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:06.935554 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:06.935612 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:06.959911 1550381 cri.go:89] found id: ""
	I1218 01:52:06.959933 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.959942 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:06.959949 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:06.960006 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:06.989689 1550381 cri.go:89] found id: ""
	I1218 01:52:06.989710 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.989719 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:06.989725 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:06.989783 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:07.016553 1550381 cri.go:89] found id: ""
	I1218 01:52:07.016578 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.016587 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:07.016594 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:07.016676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:07.042084 1550381 cri.go:89] found id: ""
	I1218 01:52:07.042106 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.042115 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:07.042121 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:07.042179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:07.067075 1550381 cri.go:89] found id: ""
	I1218 01:52:07.067097 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.067107 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:07.067113 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:07.067176 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:07.096366 1550381 cri.go:89] found id: ""
	I1218 01:52:07.096388 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.096398 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:07.096405 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:07.096465 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:07.125403 1550381 cri.go:89] found id: ""
	I1218 01:52:07.125426 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.125434 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:07.125444 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:07.125456 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:07.146124 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:07.146152 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:07.254257 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:07.245617   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.246126   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.247718   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.248202   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.249840   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:07.245617   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.246126   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.247718   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.248202   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.249840   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:07.254280 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:07.254292 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:07.280552 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:07.280590 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:07.307796 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:07.307825 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:09.873637 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:09.884205 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:09.884275 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:09.909771 1550381 cri.go:89] found id: ""
	I1218 01:52:09.909796 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.909805 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:09.909812 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:09.909869 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:09.934051 1550381 cri.go:89] found id: ""
	I1218 01:52:09.934082 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.934092 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:09.934098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:09.934161 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:09.964504 1550381 cri.go:89] found id: ""
	I1218 01:52:09.964528 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.964550 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:09.964561 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:09.964662 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:09.990501 1550381 cri.go:89] found id: ""
	I1218 01:52:09.990525 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.990534 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:09.990543 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:09.990616 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:10.028312 1550381 cri.go:89] found id: ""
	I1218 01:52:10.028339 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.028348 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:10.028355 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:10.028419 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:10.054415 1550381 cri.go:89] found id: ""
	I1218 01:52:10.054443 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.054453 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:10.054460 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:10.054545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:10.085976 1550381 cri.go:89] found id: ""
	I1218 01:52:10.086003 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.086013 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:10.086020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:10.086081 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:10.112422 1550381 cri.go:89] found id: ""
	I1218 01:52:10.112455 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.112464 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:10.112473 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:10.112485 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:10.214552 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:10.196275   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.197311   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199133   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199838   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.203593   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:10.196275   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.197311   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199133   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199838   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.203593   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:10.214579 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:10.214591 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:10.245834 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:10.245872 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:10.278949 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:10.278983 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:10.338117 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:10.338153 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
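	The pgrep timestamps (01:52:03.9, 01:52:06.9, 01:52:09.8, 01:52:12.8, ...) show the whole check-and-gather cycle retrying on roughly a three-second cadence. The excerpt does not show the surrounding retry policy, so the loop below is only a sketch of the shape of that wait, with an assumed interval and timeout:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForAPIServer repeats the same process check the log runs each
	    // cycle: pgrep exits non-zero when no process matches, so a nil error
	    // from Run() means a kube-apiserver process exists.
	    func waitForAPIServer(interval, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	                return nil
	            }
	            time.Sleep(interval)
	        }
	        return errors.New("timed out waiting for kube-apiserver process")
	    }

	    func main() {
	        if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }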
	I1218 01:52:12.853298 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:12.863919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:12.864003 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:12.888289 1550381 cri.go:89] found id: ""
	I1218 01:52:12.888315 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.888324 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:12.888330 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:12.888389 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:12.914281 1550381 cri.go:89] found id: ""
	I1218 01:52:12.914306 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.914315 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:12.914321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:12.914384 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:12.941058 1550381 cri.go:89] found id: ""
	I1218 01:52:12.941083 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.941092 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:12.941098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:12.941160 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:12.966998 1550381 cri.go:89] found id: ""
	I1218 01:52:12.967022 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.967030 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:12.967037 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:12.967095 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:12.996005 1550381 cri.go:89] found id: ""
	I1218 01:52:12.996027 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.996036 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:12.996042 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:12.996099 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:13.023321 1550381 cri.go:89] found id: ""
	I1218 01:52:13.023345 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.023354 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:13.023360 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:13.023429 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:13.049195 1550381 cri.go:89] found id: ""
	I1218 01:52:13.049220 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.049229 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:13.049235 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:13.049295 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:13.074787 1550381 cri.go:89] found id: ""
	I1218 01:52:13.074816 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.074825 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:13.074835 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:13.074874 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:13.131893 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:13.131926 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:13.159867 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:13.159942 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:13.281047 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:13.272958   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.273401   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.274898   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.275221   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.276713   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:13.272958   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.273401   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.274898   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.275221   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.276713   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:13.281070 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:13.281089 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:13.307183 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:13.307217 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
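	The "container status" command deserves a note: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl's full path when it is installed (falling back to the bare name otherwise), and if the crictl invocation fails entirely it retries with Docker, so the gather works across container runtimes. The same fallback expressed as a Go sketch (`containerStatus` is illustrative, not minikube's code, and it drops the `which` path-resolution detail):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // containerStatus mirrors the shell one-liner's fallback: prefer the
	    // CRI-aware crictl, and only if that fails fall back to docker ps.
	    func containerStatus() ([]byte, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	        if err == nil {
	            return out, nil
	        }
	        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	    }

	    func main() {
	        out, err := containerStatus()
	        if err != nil {
	            fmt.Println("no container runtime answered:", err)
	            return
	        }
	        fmt.Print(string(out))
	    }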
	I1218 01:52:15.837707 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:15.848404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:15.848478 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:15.873587 1550381 cri.go:89] found id: ""
	I1218 01:52:15.873615 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.873624 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:15.873630 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:15.873689 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:15.897757 1550381 cri.go:89] found id: ""
	I1218 01:52:15.897780 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.897788 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:15.897795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:15.897852 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:15.923098 1550381 cri.go:89] found id: ""
	I1218 01:52:15.923123 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.923132 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:15.923138 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:15.923231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:15.952891 1550381 cri.go:89] found id: ""
	I1218 01:52:15.952921 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.952929 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:15.952935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:15.952991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:15.979178 1550381 cri.go:89] found id: ""
	I1218 01:52:15.979204 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.979212 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:15.979218 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:15.979276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:16.007995 1550381 cri.go:89] found id: ""
	I1218 01:52:16.008022 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.008031 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:16.008038 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:16.008101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:16.032581 1550381 cri.go:89] found id: ""
	I1218 01:52:16.032607 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.032616 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:16.032641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:16.032709 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:16.058847 1550381 cri.go:89] found id: ""
	I1218 01:52:16.058872 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.058881 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:16.058891 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:16.058902 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:16.116382 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:16.116416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:16.131483 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:16.131513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:16.233031 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:16.217680   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.218790   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223136   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223474   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.228595   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:16.217680   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.218790   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223136   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223474   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.228595   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:16.233053 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:16.233066 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:16.262932 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:16.262966 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:18.790616 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:18.801658 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:18.801729 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:18.830076 1550381 cri.go:89] found id: ""
	I1218 01:52:18.830102 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.830112 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:18.830118 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:18.830179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:18.855278 1550381 cri.go:89] found id: ""
	I1218 01:52:18.855306 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.855315 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:18.855321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:18.855380 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:18.886976 1550381 cri.go:89] found id: ""
	I1218 01:52:18.886998 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.887012 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:18.887018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:18.887078 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:18.911656 1550381 cri.go:89] found id: ""
	I1218 01:52:18.911678 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.911686 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:18.911692 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:18.911750 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:18.935981 1550381 cri.go:89] found id: ""
	I1218 01:52:18.936002 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.936011 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:18.936017 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:18.936074 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:18.960773 1550381 cri.go:89] found id: ""
	I1218 01:52:18.960795 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.960804 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:18.960811 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:18.960871 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:18.985996 1550381 cri.go:89] found id: ""
	I1218 01:52:18.986023 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.986032 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:18.986039 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:18.986101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:19.011618 1550381 cri.go:89] found id: ""
	I1218 01:52:19.011696 1550381 logs.go:282] 0 containers: []
	W1218 01:52:19.011719 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:19.011740 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:19.011766 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:19.027064 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:19.027093 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:19.094483 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:19.086145   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.086880   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088561   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088988   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.090612   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:19.086145   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.086880   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088561   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088988   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.090612   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:19.094507 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:19.094519 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:19.120053 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:19.120087 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:19.190394 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:19.190426 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
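	One detail worth noting before the next cycle: the five "Gathering logs for ..." steps (kubelet, dmesg, describe nodes, containerd, container status) recur every cycle but in a different order each time. That reshuffling is consistent with the log sources being held in a Go map, whose range order is deliberately randomized per iteration; this is an inference from the log pattern, not a verified reading of logs.go. A toy demonstration of the effect:

	    package main

	    import "fmt"

	    func main() {
	        // Ranging over a map twice typically yields two different orders,
	        // matching how the gather steps reshuffle between cycles above.
	        sources := map[string]string{
	            "kubelet":          "journalctl -u kubelet -n 400",
	            "dmesg":            "dmesg --level warn,err,crit,alert,emerg",
	            "describe nodes":   "kubectl describe nodes",
	            "containerd":       "journalctl -u containerd -n 400",
	            "container status": "crictl ps -a",
	        }
	        for cycle := 1; cycle <= 2; cycle++ {
	            fmt.Println("cycle", cycle, "order:")
	            for name := range sources {
	                fmt.Println("  gathering logs for", name)
	            }
	        }
	    }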
	I1218 01:52:21.774413 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:21.785229 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:21.785300 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:21.814294 1550381 cri.go:89] found id: ""
	I1218 01:52:21.814316 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.814325 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:21.814331 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:21.814394 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:21.840168 1550381 cri.go:89] found id: ""
	I1218 01:52:21.840191 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.840200 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:21.840207 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:21.840267 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:21.865098 1550381 cri.go:89] found id: ""
	I1218 01:52:21.865120 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.865129 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:21.865134 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:21.865198 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:21.890513 1550381 cri.go:89] found id: ""
	I1218 01:52:21.890535 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.890543 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:21.890550 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:21.890607 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:21.915362 1550381 cri.go:89] found id: ""
	I1218 01:52:21.915384 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.915393 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:21.915399 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:21.915457 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:21.941078 1550381 cri.go:89] found id: ""
	I1218 01:52:21.941101 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.941110 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:21.941117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:21.941182 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:21.965276 1550381 cri.go:89] found id: ""
	I1218 01:52:21.965302 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.965311 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:21.965318 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:21.965375 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:21.990348 1550381 cri.go:89] found id: ""
	I1218 01:52:21.990370 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.990378 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:21.990387 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:21.990398 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:22.046097 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:22.046132 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:22.061468 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:22.061498 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:22.129867 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:22.121557   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.122235   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.123742   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.124182   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.125683   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:22.121557   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.122235   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.123742   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.124182   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.125683   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:22.129889 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:22.129901 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:22.160943 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:22.160982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:24.703063 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:24.713938 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:24.714009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:24.739085 1550381 cri.go:89] found id: ""
	I1218 01:52:24.739167 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.739189 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:24.739209 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:24.739298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:24.763316 1550381 cri.go:89] found id: ""
	I1218 01:52:24.763359 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.763368 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:24.763374 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:24.763443 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:24.789401 1550381 cri.go:89] found id: ""
	I1218 01:52:24.789431 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.789441 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:24.789471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:24.789558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:24.819426 1550381 cri.go:89] found id: ""
	I1218 01:52:24.819458 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.819468 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:24.819474 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:24.819547 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:24.844106 1550381 cri.go:89] found id: ""
	I1218 01:52:24.844143 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.844152 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:24.844159 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:24.844230 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:24.868116 1550381 cri.go:89] found id: ""
	I1218 01:52:24.868140 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.868149 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:24.868156 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:24.868213 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:24.892247 1550381 cri.go:89] found id: ""
	I1218 01:52:24.892280 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.892289 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:24.892311 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:24.892390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:24.917988 1550381 cri.go:89] found id: ""
	I1218 01:52:24.918013 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.918022 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:24.918031 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:24.918060 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:24.972539 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:24.972571 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:24.987364 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:24.987391 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:25.066535 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:25.057583   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.058416   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.060357   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.061023   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.062670   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:25.057583   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.058416   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.060357   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.061023   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.062670   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
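The repeated "connection refused" on [::1]:8443 means nothing is listening on the apiserver port at all, which is consistent with the empty kube-apiserver container list above; it is not a kubeconfig or certificate problem. That can be confirmed directly on the node. A minimal sketch, assuming ss and curl are available inside the minikube container:

    # is anything holding the apiserver port?
    sudo ss -ltn | grep ':8443' || echo "nothing listening on 8443"
    # does it answer health checks? (-k skips cert verification; this is a local probe only)
    curl -sk --max-time 5 https://localhost:8443/healthz; echo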
	I1218 01:52:25.066557 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:25.066572 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:25.093529 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:25.093573 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
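Everything from the pgrep probe at 01:52:24.7 down to this point is a single iteration of the apiserver wait loop, and the timestamps that follow show it re-running roughly every three seconds. A rough standalone sketch of that loop as traced here, with an illustrative timeout (the 300s deadline is an assumption, not minikube's actual wait budget):

    # illustrative re-creation of the wait loop this log traces
    deadline=$((SECONDS + 300))   # assumed timeout, for the sketch only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "apiserver never came up" >&2
        exit 1
      fi
      sleep 3   # interval inferred from the log timestamps
    done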
	[... the diagnostic cycle above repeats at 01:52:27, 01:52:30, 01:52:33, 01:52:36, 01:52:39, 01:52:42, and 01:52:45 with identical results: pgrep finds no kube-apiserver process, every crictl query returns no containers, and each "kubectl describe nodes" attempt fails with the same "connection refused" on localhost:8443. The iterations differ only in their timestamps, in the kubectl PIDs (10832, 10952, 11077, 11190, 11304, 11413, 11517), and in the order in which the log sources (kubelet, dmesg, describe nodes, containerd, container status) are gathered ...]
	I1218 01:52:48.291278 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:48.302161 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:48.302234 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:48.326549 1550381 cri.go:89] found id: ""
	I1218 01:52:48.326572 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.326580 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:48.326587 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:48.326647 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:48.355829 1550381 cri.go:89] found id: ""
	I1218 01:52:48.355853 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.355863 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:48.355869 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:48.355927 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:48.384367 1550381 cri.go:89] found id: ""
	I1218 01:52:48.384404 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.384414 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:48.384421 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:48.384495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:48.440457 1550381 cri.go:89] found id: ""
	I1218 01:52:48.440486 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.440495 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:48.440502 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:48.440572 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:48.484538 1550381 cri.go:89] found id: ""
	I1218 01:52:48.484565 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.484574 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:48.484580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:48.484671 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:48.517629 1550381 cri.go:89] found id: ""
	I1218 01:52:48.517655 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.517664 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:48.517670 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:48.517727 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:48.544213 1550381 cri.go:89] found id: ""
	I1218 01:52:48.544250 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.544259 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:48.544268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:48.544338 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:48.571178 1550381 cri.go:89] found id: ""
	I1218 01:52:48.571214 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.571224 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
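Each cycle sweeps the expected control-plane and addon components by name with crictl ps -a --quiet --name=..., and an empty ID list for every name is what produces the "No container was found" warnings above. A minimal sketch of that sweep (component names copied from this log; local execution is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}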
	I1218 01:52:48.571233 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:48.571244 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:48.629108 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:48.629154 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:48.644078 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:48.644105 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:48.710322 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:48.701933   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.702491   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704137   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704712   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.706352   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:48.710345 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:48.710357 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:48.735873 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:48.735908 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
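The timestamps show the runner re-probing for a kube-apiserver process roughly every three seconds with pgrep before repeating the whole log-gathering cycle. A sketch of that probe loop (the pgrep pattern is copied from the log; the interval is inferred from the timestamps, and the loop itself is illustrative rather than minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
	// pgrep exits 0 only when a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			fmt.Printf("attempt %d: kube-apiserver not running yet\n", attempt)
			time.Sleep(3 * time.Second) // interval inferred from the log timestamps
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}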
	I1218 01:52:51.264224 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:51.274867 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:51.274936 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:51.302544 1550381 cri.go:89] found id: ""
	I1218 01:52:51.302574 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.302582 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:51.302591 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:51.302650 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:51.326887 1550381 cri.go:89] found id: ""
	I1218 01:52:51.326920 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.326929 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:51.326935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:51.326996 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:51.355805 1550381 cri.go:89] found id: ""
	I1218 01:52:51.355833 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.355842 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:51.355849 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:51.355910 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:51.385402 1550381 cri.go:89] found id: ""
	I1218 01:52:51.385475 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.385502 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:51.385516 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:51.385597 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:51.429600 1550381 cri.go:89] found id: ""
	I1218 01:52:51.429679 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.429705 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:51.429723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:51.429795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:51.482295 1550381 cri.go:89] found id: ""
	I1218 01:52:51.482362 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.482386 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:51.482406 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:51.482483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:51.509210 1550381 cri.go:89] found id: ""
	I1218 01:52:51.509282 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.509307 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:51.509319 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:51.509392 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:51.534258 1550381 cri.go:89] found id: ""
	I1218 01:52:51.534335 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.534359 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:51.534374 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:51.534399 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:51.590233 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:51.590266 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:51.604772 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:51.604807 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:51.669210 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:51.660468   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.661850   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.662312   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.663995   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.664345   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:51.669233 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:51.669245 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:51.694168 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:51.694201 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:54.225084 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:54.235834 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:54.235909 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:54.263169 1550381 cri.go:89] found id: ""
	I1218 01:52:54.263202 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.263212 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:54.263219 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:54.263286 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:54.288775 1550381 cri.go:89] found id: ""
	I1218 01:52:54.288801 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.288812 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:54.288818 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:54.288881 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:54.313424 1550381 cri.go:89] found id: ""
	I1218 01:52:54.313455 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.313463 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:54.313470 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:54.313545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:54.337557 1550381 cri.go:89] found id: ""
	I1218 01:52:54.337586 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.337595 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:54.337604 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:54.337660 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:54.362944 1550381 cri.go:89] found id: ""
	I1218 01:52:54.362968 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.362976 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:54.362983 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:54.363055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:54.405526 1550381 cri.go:89] found id: ""
	I1218 01:52:54.405546 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.405554 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:54.405560 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:54.405617 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:54.470952 1550381 cri.go:89] found id: ""
	I1218 01:52:54.470975 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.470983 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:54.470995 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:54.471051 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:54.499299 1550381 cri.go:89] found id: ""
	I1218 01:52:54.499324 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.499332 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:54.499341 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:54.499352 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:54.554755 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:54.554791 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:54.569411 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:54.569439 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:54.630717 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:54.622173   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.622694   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.623736   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625233   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625729   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:54.630737 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:54.630751 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:54.656160 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:54.656197 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:57.184460 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:57.195292 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:57.195360 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:57.220784 1550381 cri.go:89] found id: ""
	I1218 01:52:57.220821 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.220831 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:57.220837 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:57.220911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:57.245470 1550381 cri.go:89] found id: ""
	I1218 01:52:57.245493 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.245501 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:57.245508 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:57.245572 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:57.271053 1550381 cri.go:89] found id: ""
	I1218 01:52:57.271076 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.271084 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:57.271091 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:57.271149 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:57.297094 1550381 cri.go:89] found id: ""
	I1218 01:52:57.297117 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.297125 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:57.297132 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:57.297189 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:57.321869 1550381 cri.go:89] found id: ""
	I1218 01:52:57.321903 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.321913 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:57.321919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:57.321980 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:57.346700 1550381 cri.go:89] found id: ""
	I1218 01:52:57.346726 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.346736 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:57.346743 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:57.346804 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:57.371462 1550381 cri.go:89] found id: ""
	I1218 01:52:57.371487 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.371496 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:57.371503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:57.371561 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:57.408706 1550381 cri.go:89] found id: ""
	I1218 01:52:57.408725 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.408733 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:57.408742 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:57.408754 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:57.518131 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:57.510001   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.510418   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512044   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512702   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.514351   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:57.518152 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:57.518165 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:57.544836 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:57.544872 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:57.572743 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:57.572782 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:57.635526 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:57.635567 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
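Each failed probe triggers the same collection pass: the kubelet and containerd units via journalctl, recent kernel warnings via dmesg, a kubectl describe nodes, and the container list. A compact sketch of running those collectors (command strings copied from the log; sequential local execution is a simplification):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		collectors := map[string]string{
			"kubelet":    "sudo journalctl -u kubelet -n 400",
			"containerd": "sudo journalctl -u containerd -n 400",
			"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		}
		for name, cmd := range collectors {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s logs failed: %v\n", name, err)
				continue
			}
			fmt.Printf("=== %s: captured %d bytes ===\n", name, len(out))
		}
	}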
	I1218 01:53:00.150459 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:00.169757 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:00.169839 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:00.240442 1550381 cri.go:89] found id: ""
	I1218 01:53:00.240472 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.240482 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:00.240489 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:00.240568 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:00.297137 1550381 cri.go:89] found id: ""
	I1218 01:53:00.297224 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.297243 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:00.297253 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:00.297363 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:00.336217 1550381 cri.go:89] found id: ""
	I1218 01:53:00.336242 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.336251 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:00.336259 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:00.336333 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:00.365991 1550381 cri.go:89] found id: ""
	I1218 01:53:00.366020 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.366030 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:00.366037 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:00.366107 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:00.425076 1550381 cri.go:89] found id: ""
	I1218 01:53:00.425152 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.425177 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:00.425198 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:00.425310 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:00.464180 1550381 cri.go:89] found id: ""
	I1218 01:53:00.464259 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.464291 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:00.464313 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:00.464419 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:00.498012 1550381 cri.go:89] found id: ""
	I1218 01:53:00.498088 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.498112 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:00.498133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:00.498248 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:00.526153 1550381 cri.go:89] found id: ""
	I1218 01:53:00.526228 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.526250 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:00.526271 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:00.526313 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:00.581384 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:00.581418 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:00.596391 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:00.596467 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:00.665518 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:00.656710   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.657369   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659279   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659812   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.661528   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:00.665541 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:00.665554 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:00.691014 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:00.691052 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:03.221071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:03.232071 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:03.232143 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:03.256975 1550381 cri.go:89] found id: ""
	I1218 01:53:03.256998 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.257006 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:03.257012 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:03.257070 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:03.286981 1550381 cri.go:89] found id: ""
	I1218 01:53:03.287006 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.287021 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:03.287028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:03.287089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:03.315833 1550381 cri.go:89] found id: ""
	I1218 01:53:03.315858 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.315867 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:03.315873 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:03.315935 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:03.343588 1550381 cri.go:89] found id: ""
	I1218 01:53:03.343611 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.343619 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:03.343626 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:03.343684 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:03.369440 1550381 cri.go:89] found id: ""
	I1218 01:53:03.369469 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.369478 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:03.369485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:03.369545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:03.428115 1550381 cri.go:89] found id: ""
	I1218 01:53:03.428138 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.428147 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:03.428154 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:03.428211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:03.484823 1550381 cri.go:89] found id: ""
	I1218 01:53:03.484847 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.484856 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:03.484862 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:03.484920 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:03.512094 1550381 cri.go:89] found id: ""
	I1218 01:53:03.512119 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.512128 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:03.512139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:03.512150 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:03.568376 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:03.568411 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:03.583603 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:03.583632 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:03.651107 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:03.641448   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.642529   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.644209   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.645062   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.646724   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:03.651129 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:03.651143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:03.676088 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:03.676125 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:06.206266 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:06.217464 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:06.217558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:06.242745 1550381 cri.go:89] found id: ""
	I1218 01:53:06.242770 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.242779 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:06.242786 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:06.242846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:06.267735 1550381 cri.go:89] found id: ""
	I1218 01:53:06.267757 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.267765 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:06.267771 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:06.267834 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:06.297274 1550381 cri.go:89] found id: ""
	I1218 01:53:06.297297 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.297306 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:06.297313 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:06.297372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:06.326794 1550381 cri.go:89] found id: ""
	I1218 01:53:06.326820 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.326829 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:06.326835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:06.326893 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:06.351519 1550381 cri.go:89] found id: ""
	I1218 01:53:06.351543 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.351552 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:06.351558 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:06.351617 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:06.378499 1550381 cri.go:89] found id: ""
	I1218 01:53:06.378525 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.378534 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:06.378540 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:06.378598 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:06.414203 1550381 cri.go:89] found id: ""
	I1218 01:53:06.414236 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.414246 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:06.414252 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:06.414316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:06.493089 1550381 cri.go:89] found id: ""
	I1218 01:53:06.493116 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.493125 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:06.493134 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:06.493147 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:06.522114 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:06.522145 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:06.578855 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:06.578891 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:06.594005 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:06.594033 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:06.658779 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:06.650476   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.651243   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.652788   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.653284   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.654784   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:53:06.658800 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:06.658814 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:09.183921 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:09.194857 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:09.194928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:09.218740 1550381 cri.go:89] found id: ""
	I1218 01:53:09.218764 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.218772 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:09.218778 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:09.218835 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:09.243853 1550381 cri.go:89] found id: ""
	I1218 01:53:09.243879 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.243888 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:09.243894 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:09.243954 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:09.269591 1550381 cri.go:89] found id: ""
	I1218 01:53:09.269615 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.269624 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:09.269630 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:09.269691 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:09.299082 1550381 cri.go:89] found id: ""
	I1218 01:53:09.299120 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.299129 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:09.299136 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:09.299207 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:09.324088 1550381 cri.go:89] found id: ""
	I1218 01:53:09.324121 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.324131 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:09.324137 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:09.324203 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:09.348898 1550381 cri.go:89] found id: ""
	I1218 01:53:09.348921 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.348930 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:09.348936 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:09.348997 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:09.374245 1550381 cri.go:89] found id: ""
	I1218 01:53:09.374268 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.374279 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:09.374286 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:09.374346 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:09.413630 1550381 cri.go:89] found id: ""
	I1218 01:53:09.413653 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.413662 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:09.413672 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:09.413689 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:09.474660 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:09.474685 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:09.541382 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:09.534111   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.534512   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.535991   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.536308   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.537731   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:09.534111   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.534512   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.535991   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.536308   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.537731   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:09.541403 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:09.541416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:09.566761 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:09.566792 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:09.593984 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:09.594011 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:12.149658 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:12.160130 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:12.160258 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:12.185266 1550381 cri.go:89] found id: ""
	I1218 01:53:12.185339 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.185356 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:12.185363 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:12.185434 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:12.212092 1550381 cri.go:89] found id: ""
	I1218 01:53:12.212124 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.212133 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:12.212139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:12.212205 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:12.235977 1550381 cri.go:89] found id: ""
	I1218 01:53:12.236009 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.236018 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:12.236024 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:12.236091 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:12.260037 1550381 cri.go:89] found id: ""
	I1218 01:53:12.260069 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.260079 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:12.260085 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:12.260151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:12.285034 1550381 cri.go:89] found id: ""
	I1218 01:53:12.285060 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.285069 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:12.285075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:12.285142 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:12.309185 1550381 cri.go:89] found id: ""
	I1218 01:53:12.309221 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.309231 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:12.309256 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:12.309330 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:12.333588 1550381 cri.go:89] found id: ""
	I1218 01:53:12.333613 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.333622 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:12.333629 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:12.333697 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:12.362204 1550381 cri.go:89] found id: ""
	I1218 01:53:12.362228 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.362237 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:12.362246 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:12.362292 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:12.427192 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:12.431443 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:12.465023 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:12.465048 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:12.534431 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:12.526324   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.526831   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528540   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528945   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.530443   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:12.526324   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.526831   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528540   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528945   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.530443   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:12.534453 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:12.534465 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:12.560311 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:12.560349 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:15.088443 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:15.100075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:15.100170 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:15.126386 1550381 cri.go:89] found id: ""
	I1218 01:53:15.126410 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.126419 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:15.126425 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:15.126493 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:15.152426 1550381 cri.go:89] found id: ""
	I1218 01:53:15.152450 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.152459 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:15.152466 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:15.152529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:15.178155 1550381 cri.go:89] found id: ""
	I1218 01:53:15.178184 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.178193 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:15.178199 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:15.178263 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:15.203664 1550381 cri.go:89] found id: ""
	I1218 01:53:15.203687 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.203696 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:15.203703 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:15.203767 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:15.228792 1550381 cri.go:89] found id: ""
	I1218 01:53:15.228815 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.228823 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:15.228830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:15.228891 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:15.257550 1550381 cri.go:89] found id: ""
	I1218 01:53:15.257575 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.257585 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:15.257594 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:15.257656 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:15.283324 1550381 cri.go:89] found id: ""
	I1218 01:53:15.283350 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.283359 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:15.283365 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:15.283430 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:15.311422 1550381 cri.go:89] found id: ""
	I1218 01:53:15.311455 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.311465 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:15.311474 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:15.311486 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:15.367419 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:15.367456 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:15.382340 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:15.382370 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:15.500526 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:15.489021   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.489409   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.492963   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.494907   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.495528   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:15.489021   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.489409   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.492963   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.494907   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.495528   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:15.500551 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:15.500563 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:15.527154 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:15.527190 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:18.057588 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:18.068726 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:18.068799 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:18.096722 1550381 cri.go:89] found id: ""
	I1218 01:53:18.096859 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.096895 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:18.096919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:18.097001 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:18.121827 1550381 cri.go:89] found id: ""
	I1218 01:53:18.121851 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.121860 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:18.121866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:18.121932 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:18.146993 1550381 cri.go:89] found id: ""
	I1218 01:53:18.147018 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.147028 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:18.147034 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:18.147094 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:18.171236 1550381 cri.go:89] found id: ""
	I1218 01:53:18.171258 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.171266 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:18.171272 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:18.171333 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:18.199330 1550381 cri.go:89] found id: ""
	I1218 01:53:18.199355 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.199367 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:18.199373 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:18.199432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:18.225625 1550381 cri.go:89] found id: ""
	I1218 01:53:18.225649 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.225659 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:18.225666 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:18.225746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:18.250702 1550381 cri.go:89] found id: ""
	I1218 01:53:18.250725 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.250734 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:18.250741 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:18.250854 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:18.276500 1550381 cri.go:89] found id: ""
	I1218 01:53:18.276525 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.276534 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:18.276543 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:18.276559 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:18.333753 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:18.333788 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:18.350466 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:18.350520 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:18.431435 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:18.419698   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.420102   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421430   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421827   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.424073   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:18.419698   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.420102   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421430   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421827   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.424073   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:18.431467 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:18.431480 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:18.463849 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:18.463889 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:21.008824 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:21.019970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:21.020040 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:21.044583 1550381 cri.go:89] found id: ""
	I1218 01:53:21.044607 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.044616 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:21.044641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:21.044701 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:21.069261 1550381 cri.go:89] found id: ""
	I1218 01:53:21.069286 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.069295 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:21.069301 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:21.069360 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:21.099196 1550381 cri.go:89] found id: ""
	I1218 01:53:21.099219 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.099228 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:21.099234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:21.099298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:21.124519 1550381 cri.go:89] found id: ""
	I1218 01:53:21.124541 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.124550 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:21.124556 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:21.124707 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:21.153447 1550381 cri.go:89] found id: ""
	I1218 01:53:21.153474 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.153483 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:21.153503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:21.153561 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:21.178670 1550381 cri.go:89] found id: ""
	I1218 01:53:21.178694 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.178702 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:21.178709 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:21.178770 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:21.207919 1550381 cri.go:89] found id: ""
	I1218 01:53:21.207944 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.207953 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:21.207959 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:21.208017 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:21.232478 1550381 cri.go:89] found id: ""
	I1218 01:53:21.232503 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.232512 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:21.232521 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:21.232533 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:21.287757 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:21.287789 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:21.302312 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:21.302349 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:21.366377 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:21.358420   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.358816   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360484   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360966   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.362554   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:21.358420   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.358816   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360484   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360966   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.362554   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:21.366399 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:21.366411 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:21.393029 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:21.393110 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:23.948667 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:23.959340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:23.959436 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:23.986999 1550381 cri.go:89] found id: ""
	I1218 01:53:23.987024 1550381 logs.go:282] 0 containers: []
	W1218 01:53:23.987033 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:23.987040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:23.987103 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:24.020720 1550381 cri.go:89] found id: ""
	I1218 01:53:24.020799 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.020833 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:24.020846 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:24.020920 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:24.047235 1550381 cri.go:89] found id: ""
	I1218 01:53:24.047267 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.047283 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:24.047299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:24.047373 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:24.080575 1550381 cri.go:89] found id: ""
	I1218 01:53:24.080599 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.080608 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:24.080615 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:24.080706 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:24.105557 1550381 cri.go:89] found id: ""
	I1218 01:53:24.105585 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.105595 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:24.105601 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:24.105661 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:24.130738 1550381 cri.go:89] found id: ""
	I1218 01:53:24.130764 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.130773 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:24.130779 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:24.130839 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:24.159061 1550381 cri.go:89] found id: ""
	I1218 01:53:24.159088 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.159097 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:24.159104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:24.159166 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:24.187647 1550381 cri.go:89] found id: ""
	I1218 01:53:24.187674 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.187684 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:24.187694 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:24.187704 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:24.242513 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:24.242544 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:24.257316 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:24.257396 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:24.320000 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:24.312222   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.312775   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314002   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314568   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.316156   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:24.312222   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.312775   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314002   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314568   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.316156   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:24.320020 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:24.320037 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:24.346099 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:24.346136 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:26.873531 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:26.885238 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:26.885314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:26.910216 1550381 cri.go:89] found id: ""
	I1218 01:53:26.910239 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.910247 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:26.910253 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:26.910313 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:26.933448 1550381 cri.go:89] found id: ""
	I1218 01:53:26.933475 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.933484 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:26.933490 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:26.933553 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:26.957855 1550381 cri.go:89] found id: ""
	I1218 01:53:26.957888 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.957897 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:26.957904 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:26.957979 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:26.982293 1550381 cri.go:89] found id: ""
	I1218 01:53:26.982357 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.982373 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:26.982380 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:26.982445 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:27.008361 1550381 cri.go:89] found id: ""
	I1218 01:53:27.008398 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.008408 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:27.008415 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:27.008475 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:27.037587 1550381 cri.go:89] found id: ""
	I1218 01:53:27.037613 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.037622 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:27.037628 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:27.037686 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:27.065312 1550381 cri.go:89] found id: ""
	I1218 01:53:27.065376 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.065401 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:27.065423 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:27.065510 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:27.090401 1550381 cri.go:89] found id: ""
	I1218 01:53:27.090427 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.090435 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:27.090445 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:27.090457 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:27.105745 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:27.105773 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:27.166883 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:27.158743   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.159249   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.160767   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.161180   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.162645   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:27.158743   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.159249   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.160767   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.161180   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.162645   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:27.166902 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:27.166917 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:27.192695 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:27.192732 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:27.224139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:27.224167 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:29.783401 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:29.794627 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:29.794738 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:29.819835 1550381 cri.go:89] found id: ""
	I1218 01:53:29.819862 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.819872 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:29.819879 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:29.819939 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:29.844881 1550381 cri.go:89] found id: ""
	I1218 01:53:29.844910 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.844919 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:29.844925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:29.844986 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:29.869995 1550381 cri.go:89] found id: ""
	I1218 01:53:29.870023 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.870032 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:29.870038 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:29.870100 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:29.895647 1550381 cri.go:89] found id: ""
	I1218 01:53:29.895671 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.895681 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:29.895687 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:29.895746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:29.922749 1550381 cri.go:89] found id: ""
	I1218 01:53:29.922773 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.922782 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:29.922788 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:29.922847 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:29.948026 1550381 cri.go:89] found id: ""
	I1218 01:53:29.948052 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.948061 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:29.948071 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:29.948129 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:29.974575 1550381 cri.go:89] found id: ""
	I1218 01:53:29.974598 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.974607 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:29.974614 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:29.974673 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:30.004723 1550381 cri.go:89] found id: ""
	I1218 01:53:30.004807 1550381 logs.go:282] 0 containers: []
	W1218 01:53:30.004831 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:30.004861 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:30.004908 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:30.103939 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:30.103976 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:30.120775 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:30.120815 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:30.191673 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:30.183408   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.184288   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.185882   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.186462   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.187470   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:30.183408   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.184288   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.185882   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.186462   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.187470   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:30.191695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:30.191707 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:30.218142 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:30.218175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:32.750923 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:32.764019 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:32.764089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:32.789861 1550381 cri.go:89] found id: ""
	I1218 01:53:32.789885 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.789894 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:32.789900 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:32.789967 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:32.821480 1550381 cri.go:89] found id: ""
	I1218 01:53:32.821513 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.821525 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:32.821532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:32.821601 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:32.847702 1550381 cri.go:89] found id: ""
	I1218 01:53:32.847733 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.847744 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:32.847751 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:32.847811 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:32.872820 1550381 cri.go:89] found id: ""
	I1218 01:53:32.872845 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.872855 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:32.872861 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:32.872976 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:32.901902 1550381 cri.go:89] found id: ""
	I1218 01:53:32.901975 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.902012 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:32.902020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:32.902100 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:32.926991 1550381 cri.go:89] found id: ""
	I1218 01:53:32.927016 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.927024 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:32.927031 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:32.927093 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:32.951930 1550381 cri.go:89] found id: ""
	I1218 01:53:32.951957 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.951966 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:32.951972 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:32.952034 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:32.977838 1550381 cri.go:89] found id: ""
	I1218 01:53:32.977864 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.977874 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:32.977883 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:32.977894 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:33.047486 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:33.037555   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.038730   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.039716   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041395   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041745   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:33.037555   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.038730   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.039716   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041395   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041745   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:33.047516 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:33.047530 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:33.074046 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:33.074084 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:33.106481 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:33.106509 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:33.164051 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:33.164095 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:35.679393 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:35.706090 1550381 out.go:203] 
	W1218 01:53:35.709129 1550381 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1218 01:53:35.709179 1550381 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1218 01:53:35.709189 1550381 out.go:285] * Related issues:
	W1218 01:53:35.709204 1550381 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1218 01:53:35.709225 1550381 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1218 01:53:35.712031 1550381 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058634955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058646516Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058675996Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058690896Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058702449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058719162Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058734998Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058749521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058766029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058797364Z" level=info msg="Connect containerd service"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.059062129Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.059621443Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.078574656Z" level=info msg="Start subscribing containerd event"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.078669144Z" level=info msg="Start recovering state"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.079191052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.079329806Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117026802Z" level=info msg="Start event monitor"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117092737Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117103362Z" level=info msg="Start streaming server"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117113224Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117122127Z" level=info msg="runtime interface starting up..."
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117129035Z" level=info msg="starting plugins..."
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117373017Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 01:47:32 newest-cni-120615 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.118837196Z" level=info msg="containerd successfully booted in 0.082564s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:45.308112   13841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:45.309853   13841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:45.311524   13841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:45.313266   13841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:45.314195   13841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:53:45 up  8:36,  0 user,  load average: 0.41, 0.55, 1.13
	Linux newest-cni-120615 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:53:41 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:41 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:41 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:42 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:42 newest-cni-120615 kubelet[13685]: E1218 01:53:42.679485   13685 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:42 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:42 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:43 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 18 01:53:43 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:43 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:43 newest-cni-120615 kubelet[13721]: E1218 01:53:43.459723   13721 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:43 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:43 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:44 newest-cni-120615 kubelet[13742]: E1218 01:53:44.225463   13742 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:44 newest-cni-120615 kubelet[13803]: E1218 01:53:44.952852   13803 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:44 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
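Two failure signatures stand out in the log above. First, the start exits with K8S_APISERVER_MISSING because no kube-apiserver process or container ever appears (every crictl query returns an empty list). Second, and more fundamentally, the kubelet is crash-looping before it can create any pods: its configuration validation rejects the host because it is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it endlessly. A quick way to confirm the host's cgroup mode, as a sketch (the stat filesystem-type check and Docker's CgroupVersion info field are standard; remediation depends on the kernel boot parameters):

	# Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup
	# Docker reports the same through its info template
	docker info --format '{{.CgroupVersion}}'

On this runner (Ubuntu 20.04, kernel 5.15-aws, cgroupfs driver per the docker info output later in this log), both checks should indicate cgroup v1, which matches the kubelet validation failure and explains why the apiserver never starts.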
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (381.361888ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-120615" apiserver is not running, skipping kubectl commands (state="Stopped")
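The harness reads individual fields of minikube's status through Go templates, as in the `status --format={{.APIServer}}` call above and the `--format={{.Host}}` call further down; the --format flag takes a Go template over minikube's status struct. A combined one-liner in the same style (a sketch; assumes standard text/template rendering of multiple fields, with the profile name taken from this run):

	# Host can be "Running" while the apiserver is "Stopped", exactly as in this run
	out/minikube-linux-arm64 status -p newest-cni-120615 --format='host={{.Host}} apiserver={{.APIServer}}'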
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-120615
helpers_test.go:244: (dbg) docker inspect newest-cni-120615:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	        "Created": "2025-12-18T01:37:46.267734033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1550552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:47:25.795117457Z",
	            "FinishedAt": "2025-12-18T01:47:24.299442993Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/hosts",
	        "LogPath": "/var/lib/docker/containers/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1/dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1-json.log",
	        "Name": "/newest-cni-120615",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-120615:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-120615",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd9cd12a762d096219196dfe021fbb5403f52996422f82f1ba5a2cd32791f2c1",
	                "LowerDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89a52383830f4a9577a0dce63c4ae4995dc5e6de1a095da2bd8b30c14c271c3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-120615",
	                "Source": "/var/lib/docker/volumes/newest-cni-120615/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-120615",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-120615",
	                "name.minikube.sigs.k8s.io": "newest-cni-120615",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "03d6121fa7465afe54c6849e5d9912cbd0edd591438a044dd295828487da20b2",
	            "SandboxKey": "/var/run/docker/netns/03d6121fa746",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34220"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-120615": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:76:51:cf:bd:72",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3561ba231e6c48a625724c6039bb103aabf4482d7db78bad659da0b08d445469",
	                    "EndpointID": "94d026911af52030bc96754a63e0334f51dcbb249930773e615cdc9fb74f4e43",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-120615",
	                        "dd9cd12a762d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
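The inspect output above is the same data minikube itself parses to find its endpoints; later in this log the provisioner runs docker container inspect with a Go template to pull the host port bound to the container's 22/tcp. The same template shape works for any of the published ports, e.g. the apiserver port (a sketch; the port value comes from the NetworkSettings.Ports block above):

	# -> 34220 in this run: 127.0.0.1:34220 forwards to the container's 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-120615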
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (316.916918ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-120615 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-120615 logs -n 25: (1.554970973s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p embed-certs-922343                                                                                                                                                                                                                                    │ embed-certs-922343           │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ delete  │ -p disable-driver-mounts-618736                                                                                                                                                                                                                          │ disable-driver-mounts-618736 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:35 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:35 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ stop    │ -p default-k8s-diff-port-207500 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:36 UTC │
	│ start   │ -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:36 UTC │ 18 Dec 25 01:37 UTC │
	│ image   │ default-k8s-diff-port-207500 image list --format=json                                                                                                                                                                                                    │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ pause   │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ unpause │ -p default-k8s-diff-port-207500 --alsologtostderr -v=1                                                                                                                                                                                                   │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ delete  │ -p default-k8s-diff-port-207500                                                                                                                                                                                                                          │ default-k8s-diff-port-207500 │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │ 18 Dec 25 01:37 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-970975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:39 UTC │                     │
	│ stop    │ -p no-preload-970975 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ addons  │ enable dashboard -p no-preload-970975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │ 18 Dec 25 01:41 UTC │
	│ start   │ -p no-preload-970975 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-970975            │ jenkins │ v1.37.0 │ 18 Dec 25 01:41 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-120615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:46 UTC │                     │
	│ stop    │ -p newest-cni-120615 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │ 18 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-120615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │ 18 Dec 25 01:47 UTC │
	│ start   │ -p newest-cni-120615 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:47 UTC │                     │
	│ image   │ newest-cni-120615 image list --format=json                                                                                                                                                                                                               │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:53 UTC │ 18 Dec 25 01:53 UTC │
	│ pause   │ -p newest-cni-120615 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:53 UTC │ 18 Dec 25 01:53 UTC │
	│ unpause │ -p newest-cni-120615 --alsologtostderr -v=1                                                                                                                                                                                                              │ newest-cni-120615            │ jenkins │ v1.37.0 │ 18 Dec 25 01:53 UTC │ 18 Dec 25 01:53 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 01:47:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 01:47:25.355718 1550381 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:47:25.355915 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.355941 1550381 out.go:374] Setting ErrFile to fd 2...
	I1218 01:47:25.355960 1550381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:47:25.356345 1550381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:47:25.356861 1550381 out.go:368] Setting JSON to false
	I1218 01:47:25.358213 1550381 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30592,"bootTime":1765991854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:47:25.358285 1550381 start.go:143] virtualization:  
	I1218 01:47:25.361184 1550381 out.go:179] * [newest-cni-120615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:47:25.364947 1550381 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:47:25.365006 1550381 notify.go:221] Checking for updates...
	I1218 01:47:25.370797 1550381 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:47:25.373705 1550381 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:25.376399 1550381 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:47:25.379145 1550381 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:47:25.381925 1550381 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1218 01:47:23.895415 1542458 node_ready.go:55] error getting node "no-preload-970975" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-970975": dial tcp 192.168.76.2:8443: connect: connection refused
	I1218 01:47:25.400717 1542458 node_ready.go:38] duration metric: took 6m0.00576723s for node "no-preload-970975" to be "Ready" ...
	I1218 01:47:25.403890 1542458 out.go:203] 
	W1218 01:47:25.406708 1542458 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1218 01:47:25.406730 1542458 out.go:285] * 
	W1218 01:47:25.413144 1542458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 01:47:25.416224 1542458 out.go:203] 
	I1218 01:47:25.385246 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:25.385825 1550381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:47:25.416975 1550381 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:47:25.417132 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.547941 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.531353346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.548100 1550381 docker.go:319] overlay module found
	I1218 01:47:25.551414 1550381 out.go:179] * Using the docker driver based on existing profile
	I1218 01:47:25.554261 1550381 start.go:309] selected driver: docker
	I1218 01:47:25.554288 1550381 start.go:927] validating driver "docker" against &{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.554406 1550381 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:47:25.555118 1550381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:47:25.640875 1550381 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:47:25.630200713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:47:25.641222 1550381 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1218 01:47:25.641258 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:25.641307 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:25.641353 1550381 start.go:353] cluster config:
	{Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:25.647668 1550381 out.go:179] * Starting "newest-cni-120615" primary control-plane node in "newest-cni-120615" cluster
	I1218 01:47:25.650778 1550381 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 01:47:25.654776 1550381 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 01:47:25.657861 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:25.657921 1550381 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
	I1218 01:47:25.657930 1550381 cache.go:65] Caching tarball of preloaded images
	I1218 01:47:25.658010 1550381 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 01:47:25.658022 1550381 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1218 01:47:25.658128 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:25.658345 1550381 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 01:47:25.717764 1550381 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 01:47:25.717789 1550381 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 01:47:25.717804 1550381 cache.go:243] Successfully downloaded all kic artifacts
	I1218 01:47:25.717832 1550381 start.go:360] acquireMachinesLock for newest-cni-120615: {Name:mkf42ef7ea9ee16a9667d77f6c6ed758b0e458ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 01:47:25.717885 1550381 start.go:364] duration metric: took 36.159µs to acquireMachinesLock for "newest-cni-120615"
	I1218 01:47:25.717905 1550381 start.go:96] Skipping create...Using existing machine configuration
	I1218 01:47:25.717910 1550381 fix.go:54] fixHost starting: 
	I1218 01:47:25.718174 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:25.745308 1550381 fix.go:112] recreateIfNeeded on newest-cni-120615: state=Stopped err=<nil>
	W1218 01:47:25.745341 1550381 fix.go:138] unexpected machine state, will restart: <nil>
	I1218 01:47:25.748580 1550381 out.go:252] * Restarting existing docker container for "newest-cni-120615" ...
	I1218 01:47:25.748689 1550381 cli_runner.go:164] Run: docker start newest-cni-120615
	I1218 01:47:26.093744 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:26.142570 1550381 kic.go:430] container "newest-cni-120615" state is running.
	I1218 01:47:26.143025 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:26.185359 1550381 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/config.json ...
	I1218 01:47:26.185574 1550381 machine.go:94] provisionDockerMachine start ...
	I1218 01:47:26.185645 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:26.213286 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:26.213626 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:26.213647 1550381 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 01:47:26.214251 1550381 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51806->127.0.0.1:34217: read: connection reset by peer
	I1218 01:47:29.372266 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:47:29.372355 1550381 ubuntu.go:182] provisioning hostname "newest-cni-120615"
	I1218 01:47:29.372452 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:29.391771 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:29.392072 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:29.392083 1550381 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-120615 && echo "newest-cni-120615" | sudo tee /etc/hostname
	I1218 01:47:29.561538 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-120615
	
	I1218 01:47:29.561625 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:29.579579 1550381 main.go:143] libmachine: Using SSH client type: native
	I1218 01:47:29.579890 1550381 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34217 <nil> <nil>}
	I1218 01:47:29.579907 1550381 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120615/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 01:47:29.737159 1550381 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 01:47:29.737184 1550381 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 01:47:29.737219 1550381 ubuntu.go:190] setting up certificates
	I1218 01:47:29.737230 1550381 provision.go:84] configureAuth start
	I1218 01:47:29.737295 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:29.756140 1550381 provision.go:143] copyHostCerts
	I1218 01:47:29.756217 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 01:47:29.756227 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 01:47:29.756310 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 01:47:29.756403 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 01:47:29.756408 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 01:47:29.756436 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 01:47:29.756487 1550381 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 01:47:29.756491 1550381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 01:47:29.756514 1550381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 01:47:29.756559 1550381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120615 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-120615]
	I1218 01:47:30.464419 1550381 provision.go:177] copyRemoteCerts
	I1218 01:47:30.464487 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 01:47:30.464527 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.482395 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.589769 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 01:47:30.608046 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1218 01:47:30.627105 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 01:47:30.645433 1550381 provision.go:87] duration metric: took 908.179647ms to configureAuth
	I1218 01:47:30.645503 1550381 ubuntu.go:206] setting minikube options for container-runtime
	I1218 01:47:30.645738 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:30.645753 1550381 machine.go:97] duration metric: took 4.460171667s to provisionDockerMachine
	I1218 01:47:30.645761 1550381 start.go:293] postStartSetup for "newest-cni-120615" (driver="docker")
	I1218 01:47:30.645773 1550381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 01:47:30.645828 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 01:47:30.645876 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.663527 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.774279 1550381 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 01:47:30.777807 1550381 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 01:47:30.777838 1550381 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 01:47:30.777851 1550381 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 01:47:30.777919 1550381 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 01:47:30.778044 1550381 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 01:47:30.778177 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 01:47:30.786077 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:47:30.804331 1550381 start.go:296] duration metric: took 158.553882ms for postStartSetup
	I1218 01:47:30.804411 1550381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:47:30.804450 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.822410 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.925924 1550381 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 01:47:30.931214 1550381 fix.go:56] duration metric: took 5.213296131s for fixHost
	I1218 01:47:30.931236 1550381 start.go:83] releasing machines lock for "newest-cni-120615", held for 5.213342998s
	I1218 01:47:30.931301 1550381 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-120615
	I1218 01:47:30.952534 1550381 ssh_runner.go:195] Run: cat /version.json
	I1218 01:47:30.952560 1550381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 01:47:30.952584 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.952698 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:30.969636 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:30.973480 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:31.167774 1550381 ssh_runner.go:195] Run: systemctl --version
	I1218 01:47:31.174874 1550381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 01:47:31.179507 1550381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 01:47:31.179587 1550381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 01:47:31.187709 1550381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
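For context: the find/mv pair above disables any stale bridge or podman CNI profiles by renaming them with a .mk_disabled suffix, so the CNI picked later (kindnet, see below) is the only active network config. A dry-run listing of what that step would touch, using the same predicates (illustrative sketch, not minikube's exact invocation):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \)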
	I1218 01:47:31.187739 1550381 start.go:496] detecting cgroup driver to use...
	I1218 01:47:31.187790 1550381 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 01:47:31.187842 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 01:47:31.205437 1550381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 01:47:31.218917 1550381 docker.go:218] disabling cri-docker service (if available) ...
	I1218 01:47:31.218989 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 01:47:31.234859 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 01:47:31.247863 1550381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 01:47:31.361666 1550381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 01:47:31.478401 1550381 docker.go:234] disabling docker service ...
	I1218 01:47:31.478516 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 01:47:31.493181 1550381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 01:47:31.506484 1550381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 01:47:31.622932 1550381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 01:47:31.755398 1550381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 01:47:31.768148 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 01:47:31.786320 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 01:47:31.795518 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 01:47:31.804506 1550381 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 01:47:31.804591 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 01:47:31.814205 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:47:31.823037 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 01:47:31.832187 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 01:47:31.841421 1550381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 01:47:31.849663 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 01:47:31.858543 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 01:47:31.867324 1550381 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 01:47:31.878120 1550381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 01:47:31.886565 1550381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 01:47:31.894226 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:32.000205 1550381 ssh_runner.go:195] Run: sudo systemctl restart containerd
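The sed runs above pin containerd to the cgroupfs driver detected on the host (SystemdCgroup = false) before the restart. A minimal sketch of the same switch plus a post-restart check, assuming the stock /etc/containerd/config.toml layout (crictl info dumps the effective CRI config as JSON):

	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl restart containerd
	sudo crictl info | grep -i systemdcgroup   # expect: "SystemdCgroup": false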
	I1218 01:47:32.119373 1550381 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 01:47:32.119494 1550381 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 01:47:32.123705 1550381 start.go:564] Will wait 60s for crictl version
	I1218 01:47:32.123796 1550381 ssh_runner.go:195] Run: which crictl
	I1218 01:47:32.127736 1550381 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 01:47:32.151646 1550381 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 01:47:32.151742 1550381 ssh_runner.go:195] Run: containerd --version
	I1218 01:47:32.171630 1550381 ssh_runner.go:195] Run: containerd --version
	I1218 01:47:32.197786 1550381 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1218 01:47:32.200756 1550381 cli_runner.go:164] Run: docker network inspect newest-cni-120615 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 01:47:32.216905 1550381 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 01:47:32.220989 1550381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:47:32.234255 1550381 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1218 01:47:32.237186 1550381 kubeadm.go:884] updating cluster {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 01:47:32.237352 1550381 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1218 01:47:32.237431 1550381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:47:32.266567 1550381 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:47:32.266592 1550381 containerd.go:534] Images already preloaded, skipping extraction
	I1218 01:47:32.266653 1550381 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 01:47:32.290056 1550381 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 01:47:32.290080 1550381 cache_images.go:86] Images are preloaded, skipping loading
	I1218 01:47:32.290087 1550381 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1218 01:47:32.290202 1550381 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-120615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1218 01:47:32.290272 1550381 ssh_runner.go:195] Run: sudo crictl info
	I1218 01:47:32.317281 1550381 cni.go:84] Creating CNI manager for ""
	I1218 01:47:32.317305 1550381 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 01:47:32.317328 1550381 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1218 01:47:32.317382 1550381 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120615 NodeName:newest-cni-120615 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 01:47:32.317534 1550381 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-120615"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 01:47:32.317611 1550381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1218 01:47:32.325240 1550381 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 01:47:32.325360 1550381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 01:47:32.332953 1550381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1218 01:47:32.345753 1550381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1218 01:47:32.358201 1550381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
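At this point the kubelet unit, its 10-kubeadm.conf drop-in, and kubeadm.yaml.new are all on disk. Two sanity checks one could run here (not part of the test itself; the `kubeadm config validate` subcommand assumes a recent kubeadm, and the binary path mirrors where minikube caches the other k8s binaries):

	sudo systemctl cat kubelet   # merged view of kubelet.service plus drop-ins
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new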
	I1218 01:47:32.371135 1550381 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 01:47:32.374910 1550381 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 01:47:32.385004 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:32.524322 1550381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:47:32.543517 1550381 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615 for IP: 192.168.85.2
	I1218 01:47:32.543581 1550381 certs.go:195] generating shared ca certs ...
	I1218 01:47:32.543620 1550381 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:32.543768 1550381 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 01:47:32.543847 1550381 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 01:47:32.543878 1550381 certs.go:257] generating profile certs ...
	I1218 01:47:32.544012 1550381 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/client.key
	I1218 01:47:32.544110 1550381 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key.939be056
	I1218 01:47:32.544194 1550381 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key
	I1218 01:47:32.544363 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 01:47:32.544429 1550381 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 01:47:32.544454 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 01:47:32.544506 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 01:47:32.544561 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 01:47:32.544639 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 01:47:32.544713 1550381 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 01:47:32.545379 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 01:47:32.570494 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 01:47:32.589292 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 01:47:32.607511 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 01:47:32.630085 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1218 01:47:32.648120 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 01:47:32.665293 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 01:47:32.683115 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/newest-cni-120615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 01:47:32.701108 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 01:47:32.719384 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 01:47:32.737332 1550381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 01:47:32.755228 1550381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 01:47:32.768547 1550381 ssh_runner.go:195] Run: openssl version
	I1218 01:47:32.775214 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.783201 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 01:47:32.791100 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.794909 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.794975 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 01:47:32.836868 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 01:47:32.844649 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.852089 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 01:47:32.859827 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.863774 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.863845 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 01:47:32.904999 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 01:47:32.912518 1550381 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.919928 1550381 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 01:47:32.927254 1550381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.930966 1550381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.931034 1550381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 01:47:32.972378 1550381 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
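The 3ec20f2e.0, b5213941.0, and 51391683.0 names being tested above are OpenSSL subject-hash links: each `openssl x509 -hash -noout` run prints the hash that the corresponding /etc/ssl/certs symlink must carry. The install step is equivalent to:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"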
	I1218 01:47:32.979895 1550381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 01:47:32.983509 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 01:47:33.024763 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 01:47:33.066928 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 01:47:33.108240 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 01:47:33.150820 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 01:47:33.193721 1550381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
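Each -checkend 86400 probe above asks whether the certificate expires within the next 86400 seconds (24 h); openssl exits non-zero if it will, which is how the restart path decides whether certs need regenerating. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"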
	I1218 01:47:33.236344 1550381 kubeadm.go:401] StartCluster: {Name:newest-cni-120615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-120615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 01:47:33.236435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 01:47:33.236534 1550381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 01:47:33.262713 1550381 cri.go:89] found id: ""
	I1218 01:47:33.262784 1550381 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 01:47:33.270865 1550381 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1218 01:47:33.270885 1550381 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1218 01:47:33.270962 1550381 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 01:47:33.278569 1550381 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 01:47:33.279133 1550381 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-120615" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:33.279389 1550381 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-1259289/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-120615" cluster setting kubeconfig missing "newest-cni-120615" context setting]
	I1218 01:47:33.279869 1550381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.281782 1550381 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 01:47:33.289414 1550381 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1218 01:47:33.289446 1550381 kubeadm.go:602] duration metric: took 18.555667ms to restartPrimaryControlPlane
	I1218 01:47:33.289461 1550381 kubeadm.go:403] duration metric: took 53.123465ms to StartCluster
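The "does not require reconfiguration" verdict above falls out of the diff: `diff -u` exits 0 when the freshly rendered kubeadm.yaml.new matches the file already on disk, so the restart path can skip re-running kubeadm. The check reduces to:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "configs identical; restarting without re-init"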
	I1218 01:47:33.289476 1550381 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.289537 1550381 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:47:33.290381 1550381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 01:47:33.290591 1550381 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 01:47:33.290894 1550381 config.go:182] Loaded profile config "newest-cni-120615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:47:33.290942 1550381 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 01:47:33.291049 1550381 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-120615"
	I1218 01:47:33.291069 1550381 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-120615"
	I1218 01:47:33.291087 1550381 addons.go:70] Setting dashboard=true in profile "newest-cni-120615"
	I1218 01:47:33.291142 1550381 addons.go:239] Setting addon dashboard=true in "newest-cni-120615"
	W1218 01:47:33.291166 1550381 addons.go:248] addon dashboard should already be in state true
	I1218 01:47:33.291217 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.291092 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.291788 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.291956 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.291099 1550381 addons.go:70] Setting default-storageclass=true in profile "newest-cni-120615"
	I1218 01:47:33.292357 1550381 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-120615"
	I1218 01:47:33.292683 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.296441 1550381 out.go:179] * Verifying Kubernetes components...
	I1218 01:47:33.299325 1550381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 01:47:33.332793 1550381 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 01:47:33.338698 1550381 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:33.338720 1550381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 01:47:33.338786 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.346302 1550381 addons.go:239] Setting addon default-storageclass=true in "newest-cni-120615"
	I1218 01:47:33.346350 1550381 host.go:66] Checking if "newest-cni-120615" exists ...
	I1218 01:47:33.346767 1550381 cli_runner.go:164] Run: docker container inspect newest-cni-120615 --format={{.State.Status}}
	I1218 01:47:33.347220 1550381 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1218 01:47:33.357584 1550381 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1218 01:47:33.364736 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1218 01:47:33.364766 1550381 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1218 01:47:33.364841 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.384388 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.388779 1550381 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:33.388806 1550381 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 01:47:33.388870 1550381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-120615
	I1218 01:47:33.420777 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.424445 1550381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34217 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/newest-cni-120615/id_rsa Username:docker}
	I1218 01:47:33.506937 1550381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 01:47:33.590614 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:33.623167 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:33.644036 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1218 01:47:33.644058 1550381 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1218 01:47:33.686194 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1218 01:47:33.686219 1550381 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1218 01:47:33.699257 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1218 01:47:33.699284 1550381 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1218 01:47:33.712575 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1218 01:47:33.712598 1550381 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1218 01:47:33.726008 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1218 01:47:33.726036 1550381 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1218 01:47:33.739578 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1218 01:47:33.739601 1550381 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1218 01:47:33.752283 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1218 01:47:33.752306 1550381 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1218 01:47:33.765197 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1218 01:47:33.765228 1550381 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1218 01:47:33.778397 1550381 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:33.778463 1550381 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1218 01:47:33.791499 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:34.144394 1550381 api_server.go:52] waiting for apiserver process to appear ...
	I1218 01:47:34.144937 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:34.144564 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145084 1550381 retry.go:31] will retry after 226.399987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.144607 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145242 1550381 retry.go:31] will retry after 194.583533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.144818 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.145308 1550381 retry.go:31] will retry after 316.325527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.341084 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:34.371646 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:34.416769 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.416804 1550381 retry.go:31] will retry after 482.49716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.445473 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.445504 1550381 retry.go:31] will retry after 401.349435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.462702 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:34.529683 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.529767 1550381 retry.go:31] will retry after 466.9672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.645712 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:34.847135 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:34.899725 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:34.915787 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.915821 1550381 retry.go:31] will retry after 680.448009ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:34.980399 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.980428 1550381 retry.go:31] will retry after 371.155762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:34.997728 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:35.075146 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.075188 1550381 retry.go:31] will retry after 528.393444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.145511 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:35.352321 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:35.422768 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.422808 1550381 retry.go:31] will retry after 703.678182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.597254 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:35.604769 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:35.645316 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:35.700025 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.700065 1550381 retry.go:31] will retry after 524.167729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:35.720166 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:35.720199 1550381 retry.go:31] will retry after 843.445988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.127505 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:36.145942 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:36.218437 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.218469 1550381 retry.go:31] will retry after 1.4365249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.224772 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:36.288029 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.288065 1550381 retry.go:31] will retry after 1.092662167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.564433 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:36.628283 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.628318 1550381 retry.go:31] will retry after 821.063441ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:36.645614 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.145021 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
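
These pgrep lines recur roughly every 500 ms: while the apply goroutines retry, minikube is polling for a running kube-apiserver process (-x requires an exact pattern match, -n picks the newest matching process, -f matches against the full command line). A sketch of that polling shape, with the command taken from the log and the deadline and interval assumed (not minikube code):

    // poll_apiserver.go - a sketch of the ~500ms process poll the
    // interleaved pgrep lines suggest. Needs pgrep and sudo at runtime.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(10 * time.Second) // assumed deadline
    	for time.Now().Before(deadline) {
    		// Exit status 0 means a matching process exists.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("kube-apiserver never appeared before the deadline")
    }
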
	I1218 01:47:37.381704 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:37.442129 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.442163 1550381 retry.go:31] will retry after 1.066797005s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.450315 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:37.513152 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.513188 1550381 retry.go:31] will retry after 2.094232702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.645565 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:37.656033 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:37.728287 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:37.728341 1550381 retry.go:31] will retry after 2.192570718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.145856 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:38.509851 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:38.574127 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.574163 1550381 retry.go:31] will retry after 2.056176901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:38.645562 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:39.145843 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:39.608414 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:47:39.645902 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:39.677401 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.677446 1550381 retry.go:31] will retry after 2.219986296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.921684 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:39.986039 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:39.986071 1550381 retry.go:31] will retry after 1.874712757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:40.145336 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:40.630985 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 01:47:40.645468 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:47:40.721503 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:40.721589 1550381 retry.go:31] will retry after 5.659633915s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.145050 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:41.645071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:41.861275 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:47:41.897736 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:41.919445 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.919480 1550381 retry.go:31] will retry after 5.257989291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:47:41.968013 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:41.968047 1550381 retry.go:31] will retry after 2.407225539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:42.145507 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:42.645709 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:43.145827 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:43.645206 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:44.145140 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:44.375521 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:44.445301 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:44.445333 1550381 retry.go:31] will retry after 6.049252935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:44.645790 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:45.145091 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:45.646076 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:46.145377 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:46.381920 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:46.446240 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:46.446272 1550381 retry.go:31] will retry after 6.470588043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:46.645629 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:47.145934 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:47.178013 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:47.241089 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:47.241122 1550381 retry.go:31] will retry after 8.808880621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:47.645680 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:48.145730 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:48.646057 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:49.145645 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:49.646010 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:50.145037 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
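Interleaved with the applies, the harness polls the node roughly every 500ms with "sudo pgrep -xnf kube-apiserver.*minikube.*" (-x exact match, -n newest process, -f match against the full command line), waiting for an apiserver process to appear. A local sketch of that polling loop (minikube runs the command over SSH inside the node; this assumes pgrep on PATH):

    // waitapiserver.go - the ~500ms liveness poll suggested by the
    // repeated pgrep runs above, executed locally for illustration.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches a running
    // process or the timeout elapses; pgrep exits 0 on a match.
    func waitForProcess(pattern string, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
    			return true
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }

    func main() {
    	if waitForProcess("kube-apiserver.*minikube.*", 30*time.Second) {
    		fmt.Println("kube-apiserver is running")
    	} else {
    		fmt.Println("timed out waiting for kube-apiserver")
    	}
    }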
	I1218 01:47:50.495265 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:50.557628 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:50.557662 1550381 retry.go:31] will retry after 5.398438748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:50.645968 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:51.145305 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:51.645106 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.145818 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.645593 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:52.917095 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:47:53.016010 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:53.016044 1550381 retry.go:31] will retry after 7.672661981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:53.145281 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:53.645853 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:54.145129 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:54.645151 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.145097 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.645490 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:55.957008 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:47:56.023826 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.023863 1550381 retry.go:31] will retry after 8.13600998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.050917 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:47:56.116243 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.116276 1550381 retry.go:31] will retry after 5.600895051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:47:56.145475 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:56.645854 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:57.145640 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:57.645927 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:58.145109 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:58.645621 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:59.145858 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:47:59.645893 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.145118 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.645093 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:00.689724 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:00.750450 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:00.750485 1550381 retry.go:31] will retry after 19.327903144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:01.145862 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:01.645460 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:01.717566 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:48:01.782999 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:01.783030 1550381 retry.go:31] will retry after 18.603092159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:02.145671 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:02.645087 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:03.145743 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:03.645040 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:04.145864 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:04.161047 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:48:04.272335 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:04.272373 1550381 retry.go:31] will retry after 12.170847168s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:04.645651 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:05.145079 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:05.645793 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:06.145198 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:06.645126 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:07.145836 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:07.645773 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:08.145131 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:08.645630 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:09.145136 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:09.645143 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:10.145076 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:10.645910 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:11.146089 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:11.645790 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:12.145142 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:12.645270 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:13.145485 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:13.645137 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:14.145724 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:14.645837 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:15.146110 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:15.645847 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:16.145895 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
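Between apply attempts, the runner polls for the apiserver process on a steady ~500ms cadence (note the alternating .145/.645 timestamps in the pgrep run above). A self-contained sketch of that wait loop, using the same pgrep invocation from the log; the 2-minute deadline is an illustrative assumption, not minikube's actual timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const pattern = `kube-apiserver.*minikube.*`
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if a matching process exists: -f matches
		// against the full command line, -x requires an exact match,
		// -n picks the newest matching process.
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}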
	I1218 01:48:16.444141 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1218 01:48:16.505161 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:16.505200 1550381 retry.go:31] will retry after 25.656674631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:16.645612 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:17.145123 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:17.645762 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:18.145134 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:18.645126 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:19.145081 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:19.645152 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:20.079482 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:20.141746 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.141779 1550381 retry.go:31] will retry after 22.047786735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.145903 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:20.387205 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:48:20.452144 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.452188 1550381 retry.go:31] will retry after 24.810473247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:20.645470 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:21.146015 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:21.645174 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:22.145273 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:22.645128 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:23.145100 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:23.645712 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:24.145139 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:24.646075 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:25.145371 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:25.645387 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:26.145943 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:26.645074 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:27.145918 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:27.645060 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:28.145641 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:28.645873 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:29.146022 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:29.645071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:30.145074 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:30.645956 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:31.145849 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:31.645447 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:32.145809 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:32.645085 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:33.146067 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:33.645142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:33.645253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:33.669719 1550381 cri.go:89] found id: ""
	I1218 01:48:33.669745 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.669754 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:33.669760 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:33.669817 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:33.695127 1550381 cri.go:89] found id: ""
	I1218 01:48:33.695150 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.695159 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:33.695164 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:33.695253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:33.719637 1550381 cri.go:89] found id: ""
	I1218 01:48:33.719659 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.719668 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:33.719674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:33.719778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:33.746705 1550381 cri.go:89] found id: ""
	I1218 01:48:33.746731 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.746740 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:33.746746 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:33.746805 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:33.774595 1550381 cri.go:89] found id: ""
	I1218 01:48:33.774620 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.774631 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:33.774638 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:33.774696 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:33.802090 1550381 cri.go:89] found id: ""
	I1218 01:48:33.802115 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.802123 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:33.802130 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:33.802187 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:33.827047 1550381 cri.go:89] found id: ""
	I1218 01:48:33.827084 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.827094 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:33.827100 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:33.827172 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:33.855186 1550381 cri.go:89] found id: ""
	I1218 01:48:33.855213 1550381 logs.go:282] 0 containers: []
	W1218 01:48:33.855222 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
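The sweep above checks each expected control-plane container by name and finds none, which is why the diagnostics pass that follows has nothing to pull from the CRI. A compact sketch of the same sweep; the component list is copied from the log, and sudo/crictl availability on the node is assumed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs; empty output means the
		// container was never created, matching the `found id: ""` lines.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		id := strings.TrimSpace(string(out))
		if err != nil || id == "" {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, id)
	}
}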
	I1218 01:48:33.855230 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:33.855241 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:33.910490 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:33.910527 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:33.925321 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:33.925361 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:33.990602 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:33.982192    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.983024    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.984725    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.985196    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.986663    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:33.982192    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.983024    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.984725    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.985196    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:33.986663    1854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:33.990624 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:33.990636 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:34.016861 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:34.016901 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
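Every failure in this section bottoms out in the same symptom: dial tcp [::1]:8443: connect: connection refused. That means nothing is listening on the apiserver port at all, consistent with the empty crictl results above, rather than a TLS or auth problem. One way to confirm that independently of kubectl, as a sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connection refused" here means the socket is closed outright:
		// no apiserver container is running, matching the empty
		// `crictl ps --name=kube-apiserver` results in the log.
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}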
	I1218 01:48:36.546620 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:36.557304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:36.557390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:36.582868 1550381 cri.go:89] found id: ""
	I1218 01:48:36.582891 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.582900 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:36.582906 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:36.582964 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:36.608045 1550381 cri.go:89] found id: ""
	I1218 01:48:36.608067 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.608075 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:36.608081 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:36.608137 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:36.633385 1550381 cri.go:89] found id: ""
	I1218 01:48:36.633408 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.633417 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:36.633423 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:36.633482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:36.657140 1550381 cri.go:89] found id: ""
	I1218 01:48:36.657165 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.657175 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:36.657187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:36.657254 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:36.686651 1550381 cri.go:89] found id: ""
	I1218 01:48:36.686673 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.686683 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:36.686689 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:36.686753 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:36.712049 1550381 cri.go:89] found id: ""
	I1218 01:48:36.712073 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.712082 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:36.712089 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:36.712146 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:36.736327 1550381 cri.go:89] found id: ""
	I1218 01:48:36.736355 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.736369 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:36.736375 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:36.736432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:36.763059 1550381 cri.go:89] found id: ""
	I1218 01:48:36.763085 1550381 logs.go:282] 0 containers: []
	W1218 01:48:36.763094 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:36.763104 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:36.763115 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:36.818060 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:36.818095 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:36.833161 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:36.833198 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:36.900981 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:36.892245    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.892727    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894448    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894886    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.896574    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:36.892245    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.892727    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894448    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.894886    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:36.896574    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:36.901005 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:36.901018 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:36.926395 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:36.926435 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
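When the process check keeps failing, the runner falls back to the diagnostics pass seen in the logs.go:123 lines above: kubelet and containerd journals, kernel warnings, and container status. A condensed sketch of that pass, reusing the commands verbatim from the log; the sequencing and error handling are simplified for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		fmt.Printf("==> %s <==\n", s.name)
		// Each command runs through bash, just as ssh_runner does above.
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gather %s failed: %v\n", s.name, err)
		}
		fmt.Print(string(out))
	}
}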
	I1218 01:48:39.461526 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:39.472938 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:39.473011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:39.499282 1550381 cri.go:89] found id: ""
	I1218 01:48:39.499309 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.499317 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:39.499324 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:39.499387 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:39.524947 1550381 cri.go:89] found id: ""
	I1218 01:48:39.524983 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.524992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:39.524998 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:39.525108 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:39.549919 1550381 cri.go:89] found id: ""
	I1218 01:48:39.549944 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.549953 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:39.549959 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:39.550021 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:39.574351 1550381 cri.go:89] found id: ""
	I1218 01:48:39.574376 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.574391 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:39.574398 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:39.574456 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:39.598033 1550381 cri.go:89] found id: ""
	I1218 01:48:39.598054 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.598063 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:39.598069 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:39.598133 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:39.626910 1550381 cri.go:89] found id: ""
	I1218 01:48:39.626932 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.626940 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:39.626946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:39.627002 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:39.655231 1550381 cri.go:89] found id: ""
	I1218 01:48:39.655302 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.655326 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:39.655346 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:39.655426 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:39.684000 1550381 cri.go:89] found id: ""
	I1218 01:48:39.684079 1550381 logs.go:282] 0 containers: []
	W1218 01:48:39.684106 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:39.684129 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:39.684170 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:39.739075 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:39.739109 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:39.753861 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:39.753890 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:39.817313 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:39.809803    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.810430    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.811897    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.812206    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.813639    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:39.809803    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.810430    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.811897    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.812206    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:39.813639    2081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:39.817335 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:39.817347 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:39.842685 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:39.842727 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:42.162239 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1218 01:48:42.190324 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:48:42.249384 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:48:42.249527 1550381 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1218 01:48:42.279196 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:42.279234 1550381 retry.go:31] will retry after 35.148907823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:42.371473 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:48:42.382637 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:48:42.382711 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:48:42.428461 1550381 cri.go:89] found id: ""
	I1218 01:48:42.428490 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.428499 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:48:42.428505 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:48:42.428565 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:48:42.464484 1550381 cri.go:89] found id: ""
	I1218 01:48:42.464511 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.464520 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:48:42.464526 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:48:42.464600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:48:42.501574 1550381 cri.go:89] found id: ""
	I1218 01:48:42.501644 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.501668 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:48:42.501682 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:48:42.501756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:48:42.529255 1550381 cri.go:89] found id: ""
	I1218 01:48:42.529283 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.529292 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:48:42.529299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:48:42.529357 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:48:42.563020 1550381 cri.go:89] found id: ""
	I1218 01:48:42.563093 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.563130 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:48:42.563153 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:48:42.563240 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:48:42.589599 1550381 cri.go:89] found id: ""
	I1218 01:48:42.589672 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.589689 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:48:42.589697 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:48:42.589756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:48:42.620478 1550381 cri.go:89] found id: ""
	I1218 01:48:42.620500 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.620509 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:48:42.620515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:48:42.620600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:48:42.647535 1550381 cri.go:89] found id: ""
	I1218 01:48:42.647560 1550381 logs.go:282] 0 containers: []
	W1218 01:48:42.647574 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:48:42.647583 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:48:42.647594 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:48:42.705328 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:48:42.705366 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:48:42.720602 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:48:42.720653 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:48:42.791434 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:48:42.782564    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.783462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785016    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.787043    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:48:42.782564    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.783462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785016    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.785462    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:48:42.787043    2203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:48:42.791460 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:48:42.791474 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:48:42.816821 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:48:42.816855 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:48:45.263722 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1218 01:48:45.345805 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 01:48:45.349241 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1218 01:48:45.349279 1550381 retry.go:31] will retry after 26.611542555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[the same diagnostic cycle repeats at 01:48:45, 01:48:48, 01:48:51, 01:48:54, 01:48:57, and 01:49:00: `sudo pgrep -xnf kube-apiserver.*minikube.*` and the crictl checks for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard each find 0 containers; kubelet, dmesg, containerd, and container status logs are gathered; and every "describe nodes" attempt fails with "The connection to the server localhost:8443 was refused - did you specify the right host or port?"]
	I1218 01:49:03.251500 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:03.263863 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:03.263937 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:03.292341 1550381 cri.go:89] found id: ""
	I1218 01:49:03.292363 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.292372 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:03.292379 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:03.292444 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:03.318593 1550381 cri.go:89] found id: ""
	I1218 01:49:03.318618 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.318627 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:03.318633 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:03.318713 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:03.342954 1550381 cri.go:89] found id: ""
	I1218 01:49:03.342976 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.342984 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:03.342990 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:03.343056 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:03.369216 1550381 cri.go:89] found id: ""
	I1218 01:49:03.369240 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.369255 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:03.369262 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:03.369321 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:03.418160 1550381 cri.go:89] found id: ""
	I1218 01:49:03.418196 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.418208 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:03.418234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:03.418314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:03.468056 1550381 cri.go:89] found id: ""
	I1218 01:49:03.468090 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.468100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:03.468107 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:03.468177 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:03.493930 1550381 cri.go:89] found id: ""
	I1218 01:49:03.493954 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.493964 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:03.493970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:03.494028 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:03.522766 1550381 cri.go:89] found id: ""
	I1218 01:49:03.522799 1550381 logs.go:282] 0 containers: []
	W1218 01:49:03.522808 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:03.522817 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:03.522845 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:03.579881 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:03.579922 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:03.595497 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:03.595533 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:03.664750 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:03.656141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.656975    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.658591    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.659141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.660853    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:03.656141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.656975    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.658591    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.659141    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:03.660853    3006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:03.664774 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:03.664789 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:03.690066 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:03.690102 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:06.220404 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:06.230940 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:06.231013 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:06.258449 1550381 cri.go:89] found id: ""
	I1218 01:49:06.258493 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.258501 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:06.258511 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:06.258570 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:06.284944 1550381 cri.go:89] found id: ""
	I1218 01:49:06.284967 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.284975 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:06.284981 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:06.285038 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:06.310888 1550381 cri.go:89] found id: ""
	I1218 01:49:06.310914 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.310923 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:06.310929 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:06.310992 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:06.336281 1550381 cri.go:89] found id: ""
	I1218 01:49:06.336306 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.336316 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:06.336322 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:06.336384 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:06.361424 1550381 cri.go:89] found id: ""
	I1218 01:49:06.361489 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.361507 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:06.361515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:06.361581 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:06.386353 1550381 cri.go:89] found id: ""
	I1218 01:49:06.386381 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.386390 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:06.386396 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:06.386458 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:06.420497 1550381 cri.go:89] found id: ""
	I1218 01:49:06.420523 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.420533 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:06.420540 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:06.420599 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:06.477983 1550381 cri.go:89] found id: ""
	I1218 01:49:06.478008 1550381 logs.go:282] 0 containers: []
	W1218 01:49:06.478017 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:06.478033 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:06.478045 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:06.542941 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:06.542988 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:06.557943 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:06.557971 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:06.638974 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:06.630522    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.631109    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.632992    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.633569    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.635234    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:06.630522    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.631109    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.632992    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.633569    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:06.635234    3120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:06.638996 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:06.639008 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:06.665193 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:06.665231 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:09.197687 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:09.208321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:09.208432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:09.233962 1550381 cri.go:89] found id: ""
	I1218 01:49:09.233985 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.233993 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:09.234000 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:09.234061 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:09.262673 1550381 cri.go:89] found id: ""
	I1218 01:49:09.262697 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.262706 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:09.262712 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:09.262773 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:09.289951 1550381 cri.go:89] found id: ""
	I1218 01:49:09.289973 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.289982 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:09.289988 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:09.290053 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:09.314541 1550381 cri.go:89] found id: ""
	I1218 01:49:09.314570 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.314578 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:09.314585 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:09.314650 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:09.343459 1550381 cri.go:89] found id: ""
	I1218 01:49:09.343484 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.343493 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:09.343500 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:09.343563 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:09.376389 1550381 cri.go:89] found id: ""
	I1218 01:49:09.376413 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.376422 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:09.376429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:09.376488 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:09.436490 1550381 cri.go:89] found id: ""
	I1218 01:49:09.436567 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.436591 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:09.436611 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:09.436730 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:09.486769 1550381 cri.go:89] found id: ""
	I1218 01:49:09.486798 1550381 logs.go:282] 0 containers: []
	W1218 01:49:09.486807 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:09.486817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:09.486827 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:09.512058 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:09.512099 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:09.540109 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:09.540137 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:09.595196 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:09.595233 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:09.610057 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:09.610088 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:09.676821 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:09.667757    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.668366    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670119    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670625    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.672212    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:09.667757    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.668366    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670119    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.670625    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:09.672212    3246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:11.961101 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1218 01:49:12.022946 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:49:12.023052 1550381 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
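
[editor's note] The storageclass addon apply fails at client-side validation because kubectl cannot download the OpenAPI schema from the dead apiserver. The suggested --validate=false would only skip validation; the apply itself would still fail against localhost:8443, so minikube logs "apply failed, will retry" and queues another attempt instead. A minimal retry wrapper in the same spirit (the attempt count, delay, and function shape are illustrative assumptions, not minikube's addons.go):

    // apply_retry.go - sketch of the "apply failed, will retry" behaviour.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry shells out to kubectl apply and backs off on failure.
    // Retry count and delay are assumptions for illustration only.
    func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            out, e := exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if e != nil {
                err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
                time.Sleep(delay) // the apiserver may still be starting
                continue
            }
            return nil
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3, 5*time.Second); err != nil {
            fmt.Println("giving up:", err)
        }
    }

The storage-provisioner addon hits the identical error further down, which is why the run ends with "Enabled addons:" listing nothing.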
	I1218 01:49:12.177224 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:12.188868 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:12.188946 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:12.214139 1550381 cri.go:89] found id: ""
	I1218 01:49:12.214162 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.214171 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:12.214178 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:12.214264 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:12.242355 1550381 cri.go:89] found id: ""
	I1218 01:49:12.242380 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.242389 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:12.242395 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:12.242483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:12.266515 1550381 cri.go:89] found id: ""
	I1218 01:49:12.266540 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.266548 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:12.266555 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:12.266613 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:12.290463 1550381 cri.go:89] found id: ""
	I1218 01:49:12.290529 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.290545 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:12.290553 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:12.290618 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:12.318223 1550381 cri.go:89] found id: ""
	I1218 01:49:12.318247 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.318256 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:12.318262 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:12.318337 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:12.342197 1550381 cri.go:89] found id: ""
	I1218 01:49:12.342222 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.342231 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:12.342238 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:12.342302 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:12.370588 1550381 cri.go:89] found id: ""
	I1218 01:49:12.370611 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.370620 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:12.370626 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:12.370688 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:12.418224 1550381 cri.go:89] found id: ""
	I1218 01:49:12.418249 1550381 logs.go:282] 0 containers: []
	W1218 01:49:12.418258 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:12.418268 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:12.418279 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:12.523068 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:12.514778    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.515411    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517016    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517626    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.519200    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:12.514778    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.515411    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517016    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.517626    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:12.519200    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:12.523095 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:12.523108 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:12.549040 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:12.549076 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:12.577176 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:12.577201 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:12.631665 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:12.631703 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:15.147547 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:15.158736 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:15.158812 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:15.184772 1550381 cri.go:89] found id: ""
	I1218 01:49:15.184838 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.184862 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:15.184881 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:15.184962 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:15.210609 1550381 cri.go:89] found id: ""
	I1218 01:49:15.210632 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.210641 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:15.210648 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:15.210712 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:15.238686 1550381 cri.go:89] found id: ""
	I1218 01:49:15.238722 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.238734 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:15.238741 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:15.238815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:15.264618 1550381 cri.go:89] found id: ""
	I1218 01:49:15.264675 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.264684 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:15.264692 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:15.264757 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:15.295205 1550381 cri.go:89] found id: ""
	I1218 01:49:15.295229 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.295244 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:15.295250 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:15.295319 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:15.320375 1550381 cri.go:89] found id: ""
	I1218 01:49:15.320398 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.320406 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:15.320412 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:15.320472 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:15.345880 1550381 cri.go:89] found id: ""
	I1218 01:49:15.345912 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.345921 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:15.345928 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:15.345989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:15.371477 1550381 cri.go:89] found id: ""
	I1218 01:49:15.371499 1550381 logs.go:282] 0 containers: []
	W1218 01:49:15.371508 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:15.371518 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:15.371530 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:15.432289 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:15.432325 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:15.513081 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:15.513118 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:15.528085 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:15.528163 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:15.589922 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:15.582052    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.582656    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584154    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584659    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.586181    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:15.582052    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.582656    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584154    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.584659    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:15.586181    3475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:15.589943 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:15.589955 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:17.429823 1550381 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1218 01:49:17.494063 1550381 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1218 01:49:17.494186 1550381 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1218 01:49:17.497997 1550381 out.go:179] * Enabled addons: 
	I1218 01:49:17.500791 1550381 addons.go:530] duration metric: took 1m44.209848117s for enable addons: enabled=[]
	I1218 01:49:18.115485 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:18.126625 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:18.126750 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:18.152997 1550381 cri.go:89] found id: ""
	I1218 01:49:18.153031 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.153041 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:18.153048 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:18.153114 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:18.184726 1550381 cri.go:89] found id: ""
	I1218 01:49:18.184748 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.184757 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:18.184764 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:18.184833 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:18.213873 1550381 cri.go:89] found id: ""
	I1218 01:49:18.213945 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.213971 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:18.213991 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:18.214081 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:18.243010 1550381 cri.go:89] found id: ""
	I1218 01:49:18.243086 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.243109 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:18.243128 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:18.243218 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:18.267052 1550381 cri.go:89] found id: ""
	I1218 01:49:18.267117 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.267142 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:18.267158 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:18.267246 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:18.291939 1550381 cri.go:89] found id: ""
	I1218 01:49:18.292002 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.292026 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:18.292045 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:18.292129 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:18.318195 1550381 cri.go:89] found id: ""
	I1218 01:49:18.318219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.318233 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:18.318240 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:18.318299 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:18.346276 1550381 cri.go:89] found id: ""
	I1218 01:49:18.346310 1550381 logs.go:282] 0 containers: []
	W1218 01:49:18.346319 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:18.346329 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:18.346341 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:18.407199 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:18.407257 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:18.440997 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:18.441077 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:18.537719 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:18.529288    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.529919    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.531704    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.532082    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.533717    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:18.529288    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.529919    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.531704    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.532082    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:18.533717    3582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:18.537789 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:18.537810 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:18.563514 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:18.563550 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:21.091361 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:21.102189 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:21.102289 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:21.130931 1550381 cri.go:89] found id: ""
	I1218 01:49:21.130958 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.130967 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:21.130974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:21.131033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:21.155877 1550381 cri.go:89] found id: ""
	I1218 01:49:21.155951 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.155984 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:21.156004 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:21.156088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:21.180785 1550381 cri.go:89] found id: ""
	I1218 01:49:21.180809 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.180818 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:21.180824 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:21.180908 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:21.206344 1550381 cri.go:89] found id: ""
	I1218 01:49:21.206366 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.206375 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:21.206381 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:21.206441 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:21.230752 1550381 cri.go:89] found id: ""
	I1218 01:49:21.230775 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.230783 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:21.230789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:21.230846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:21.255317 1550381 cri.go:89] found id: ""
	I1218 01:49:21.255391 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.255416 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:21.255436 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:21.255520 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:21.284319 1550381 cri.go:89] found id: ""
	I1218 01:49:21.284345 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.284355 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:21.284361 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:21.284420 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:21.313090 1550381 cri.go:89] found id: ""
	I1218 01:49:21.313116 1550381 logs.go:282] 0 containers: []
	W1218 01:49:21.313124 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:21.313133 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:21.313143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:21.367961 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:21.367997 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:21.382941 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:21.382972 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:21.496229 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:21.487688    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.488579    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490302    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490870    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.492137    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:21.487688    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.488579    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490302    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.490870    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:21.492137    3690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:21.496249 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:21.496261 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:21.526182 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:21.526216 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:24.057294 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:24.070220 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:24.070292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:24.104394 1550381 cri.go:89] found id: ""
	I1218 01:49:24.104419 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.104428 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:24.104434 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:24.104495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:24.129335 1550381 cri.go:89] found id: ""
	I1218 01:49:24.129358 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.129366 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:24.129371 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:24.129429 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:24.153339 1550381 cri.go:89] found id: ""
	I1218 01:49:24.153361 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.153370 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:24.153376 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:24.153439 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:24.178645 1550381 cri.go:89] found id: ""
	I1218 01:49:24.178669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.178677 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:24.178684 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:24.178742 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:24.202721 1550381 cri.go:89] found id: ""
	I1218 01:49:24.202744 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.202753 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:24.202765 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:24.202827 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:24.228231 1550381 cri.go:89] found id: ""
	I1218 01:49:24.228255 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.228264 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:24.228271 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:24.228334 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:24.252564 1550381 cri.go:89] found id: ""
	I1218 01:49:24.252585 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.252593 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:24.252599 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:24.252682 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:24.282899 1550381 cri.go:89] found id: ""
	I1218 01:49:24.282975 1550381 logs.go:282] 0 containers: []
	W1218 01:49:24.283000 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:24.283015 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:24.283027 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:24.340471 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:24.340506 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:24.355477 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:24.355511 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:24.448676 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:24.434380    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.435192    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.436820    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441209    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:24.441503    3804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
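Every kubectl attempt here dies with `connect: connection refused` against [::1]:8443, which means no process is listening on the apiserver port at all; this is not a TLS or RBAC failure, which would only surface after the TCP connection succeeds. A quick standalone check of that distinction, as a sketch in Go:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// A plain TCP dial is enough to separate "no listener" from
    	// higher-level failures: connection refused here matches the
    	// memcache.go errors in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8443 (failure is above TCP)")
    }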
	I1218 01:49:24.448701 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:24.448720 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:24.484800 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:24.484875 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:27.016359 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:27.027204 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:27.027276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:27.054358 1550381 cri.go:89] found id: ""
	I1218 01:49:27.054383 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.054392 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:27.054398 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:27.054456 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:27.079191 1550381 cri.go:89] found id: ""
	I1218 01:49:27.079219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.079228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:27.079234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:27.079297 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:27.104834 1550381 cri.go:89] found id: ""
	I1218 01:49:27.104856 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.104865 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:27.104871 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:27.104943 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:27.134064 1550381 cri.go:89] found id: ""
	I1218 01:49:27.134138 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.134154 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:27.134161 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:27.134227 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:27.159891 1550381 cri.go:89] found id: ""
	I1218 01:49:27.159915 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.159925 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:27.159931 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:27.159990 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:27.186008 1550381 cri.go:89] found id: ""
	I1218 01:49:27.186035 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.186044 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:27.186050 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:27.186135 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:27.211311 1550381 cri.go:89] found id: ""
	I1218 01:49:27.211337 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.211346 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:27.211352 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:27.211433 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:27.236397 1550381 cri.go:89] found id: ""
	I1218 01:49:27.236431 1550381 logs.go:282] 0 containers: []
	W1218 01:49:27.236440 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:27.236450 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:27.236461 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:27.293966 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:27.294001 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:27.309317 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:27.309355 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:27.380717 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:27.372509    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.373162    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374199    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.374687    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:27.376361    3919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:27.380737 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:27.380749 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:27.410136 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:27.410175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
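The container-status command uses a shell fallback: `which crictl || echo crictl` resolves crictl's absolute path so it survives sudo's PATH handling, and if the crictl listing still fails, the `||` drops back to `docker ps -a`. The same try-then-fall-back pattern, sketched in Go assuming at least one of the two CLIs is installed:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, mirroring
    // `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`.
    func containerStatus() ([]byte, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
    			return out, nil
    		}
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("neither crictl nor docker produced a listing:", err)
    		return
    	}
    	fmt.Print(string(out))
    }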
	I1218 01:49:29.955798 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:29.968674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:29.968788 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:29.996170 1550381 cri.go:89] found id: ""
	I1218 01:49:29.996197 1550381 logs.go:282] 0 containers: []
	W1218 01:49:29.996208 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:29.996214 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:29.996276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:30.036959 1550381 cri.go:89] found id: ""
	I1218 01:49:30.036983 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.036992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:30.036999 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:30.037067 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:30.069036 1550381 cri.go:89] found id: ""
	I1218 01:49:30.069065 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.069076 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:30.069092 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:30.069231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:30.098534 1550381 cri.go:89] found id: ""
	I1218 01:49:30.098559 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.098568 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:30.098575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:30.098637 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:30.127481 1550381 cri.go:89] found id: ""
	I1218 01:49:30.127506 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.127515 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:30.127521 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:30.127588 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:30.153748 1550381 cri.go:89] found id: ""
	I1218 01:49:30.153773 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.153782 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:30.153789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:30.153872 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:30.178887 1550381 cri.go:89] found id: ""
	I1218 01:49:30.178913 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.178922 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:30.178929 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:30.179010 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:30.204533 1550381 cri.go:89] found id: ""
	I1218 01:49:30.204559 1550381 logs.go:282] 0 containers: []
	W1218 01:49:30.204568 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:30.204578 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:30.204589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:30.260146 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:30.260180 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:30.275037 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:30.275067 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:30.338959 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:30.330794    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.331353    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333075    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.333584    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:30.335039    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:30.338978 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:30.338990 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:30.364082 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:30.364116 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:32.906096 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:32.916660 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:32.916731 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:32.940216 1550381 cri.go:89] found id: ""
	I1218 01:49:32.940238 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.940247 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:32.940254 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:32.940314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:32.967934 1550381 cri.go:89] found id: ""
	I1218 01:49:32.967956 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.967963 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:32.967970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:32.968027 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:32.991930 1550381 cri.go:89] found id: ""
	I1218 01:49:32.991952 1550381 logs.go:282] 0 containers: []
	W1218 01:49:32.991961 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:32.991968 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:32.992027 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:33.018215 1550381 cri.go:89] found id: ""
	I1218 01:49:33.018280 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.018303 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:33.018322 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:33.018416 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:33.046738 1550381 cri.go:89] found id: ""
	I1218 01:49:33.046783 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.046794 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:33.046801 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:33.046873 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:33.072642 1550381 cri.go:89] found id: ""
	I1218 01:49:33.072669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.072678 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:33.072684 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:33.072743 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:33.097687 1550381 cri.go:89] found id: ""
	I1218 01:49:33.097713 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.097722 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:33.097729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:33.097980 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:33.125010 1550381 cri.go:89] found id: ""
	I1218 01:49:33.125090 1550381 logs.go:282] 0 containers: []
	W1218 01:49:33.125107 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:33.125118 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:33.125134 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:33.139761 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:33.139795 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:33.204966 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:33.197038    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.197630    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199169    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.199600    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:33.201028    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:33.204990 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:33.205002 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:33.230884 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:33.230929 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:33.263709 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:33.263739 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:35.820022 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:35.830483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:35.830552 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:35.855134 1550381 cri.go:89] found id: ""
	I1218 01:49:35.855161 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.855170 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:35.855177 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:35.855239 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:35.881968 1550381 cri.go:89] found id: ""
	I1218 01:49:35.881997 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.882006 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:35.882013 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:35.882074 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:35.907456 1550381 cri.go:89] found id: ""
	I1218 01:49:35.907481 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.907490 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:35.907496 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:35.907555 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:35.936819 1550381 cri.go:89] found id: ""
	I1218 01:49:35.936845 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.936854 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:35.936860 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:35.936939 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:35.961081 1550381 cri.go:89] found id: ""
	I1218 01:49:35.961107 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.961116 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:35.961123 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:35.961187 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:35.985065 1550381 cri.go:89] found id: ""
	I1218 01:49:35.985091 1550381 logs.go:282] 0 containers: []
	W1218 01:49:35.985100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:35.985106 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:35.985189 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:36.013869 1550381 cri.go:89] found id: ""
	I1218 01:49:36.013894 1550381 logs.go:282] 0 containers: []
	W1218 01:49:36.013903 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:36.013909 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:36.013972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:36.039260 1550381 cri.go:89] found id: ""
	I1218 01:49:36.039283 1550381 logs.go:282] 0 containers: []
	W1218 01:49:36.039291 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:36.039300 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:36.039312 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:36.069571 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:36.069659 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:36.126151 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:36.126186 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:36.141484 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:36.141514 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:36.209837 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:36.200737    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.201540    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.202385    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.203307    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:36.204008    4271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:36.209870 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:36.209883 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:38.735237 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:38.746104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:38.746193 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:38.772225 1550381 cri.go:89] found id: ""
	I1218 01:49:38.772252 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.772261 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:38.772268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:38.772330 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:38.797393 1550381 cri.go:89] found id: ""
	I1218 01:49:38.797420 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.797429 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:38.797435 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:38.797498 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:38.822824 1550381 cri.go:89] found id: ""
	I1218 01:49:38.822847 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.822859 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:38.822868 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:38.822927 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:38.847877 1550381 cri.go:89] found id: ""
	I1218 01:49:38.847910 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.847919 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:38.847925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:38.847985 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:38.874529 1550381 cri.go:89] found id: ""
	I1218 01:49:38.874555 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.874564 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:38.874570 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:38.874655 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:38.902339 1550381 cri.go:89] found id: ""
	I1218 01:49:38.902406 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.902429 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:38.902447 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:38.902535 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:38.927712 1550381 cri.go:89] found id: ""
	I1218 01:49:38.927745 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.927754 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:38.927761 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:38.927830 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:38.954870 1550381 cri.go:89] found id: ""
	I1218 01:49:38.954937 1550381 logs.go:282] 0 containers: []
	W1218 01:49:38.954964 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:38.954986 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:38.955069 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:39.010028 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:39.010080 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:39.025363 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:39.025392 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:39.091129 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:39.080844    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.081674    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.083594    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.084220    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:39.086510    4374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:39.091201 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:39.091221 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:39.116775 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:39.116809 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:41.650913 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:41.662276 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:41.662344 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:41.731218 1550381 cri.go:89] found id: ""
	I1218 01:49:41.731246 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.731255 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:41.731261 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:41.731319 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:41.756567 1550381 cri.go:89] found id: ""
	I1218 01:49:41.756665 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.756680 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:41.756686 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:41.756755 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:41.785421 1550381 cri.go:89] found id: ""
	I1218 01:49:41.785449 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.785458 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:41.785464 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:41.785522 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:41.810479 1550381 cri.go:89] found id: ""
	I1218 01:49:41.810501 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.810510 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:41.810524 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:41.810590 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:41.835839 1550381 cri.go:89] found id: ""
	I1218 01:49:41.835863 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.835872 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:41.835878 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:41.835940 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:41.864064 1550381 cri.go:89] found id: ""
	I1218 01:49:41.864092 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.864100 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:41.864106 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:41.864162 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:41.889810 1550381 cri.go:89] found id: ""
	I1218 01:49:41.889880 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.889911 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:41.889924 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:41.889997 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:41.913756 1550381 cri.go:89] found id: ""
	I1218 01:49:41.913824 1550381 logs.go:282] 0 containers: []
	W1218 01:49:41.913849 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:41.913871 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:41.913902 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:41.943258 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:41.943283 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:41.998631 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:41.998673 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:42.016861 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:42.016892 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:42.086550 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:42.077000    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.077668    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.079628    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.080105    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:42.081866    4495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:49:42.086592 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:42.086609 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:44.616940 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:44.627561 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:44.627705 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:44.700300 1550381 cri.go:89] found id: ""
	I1218 01:49:44.700322 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.700331 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:44.700337 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:44.700396 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:44.736586 1550381 cri.go:89] found id: ""
	I1218 01:49:44.736669 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.736685 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:44.736693 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:44.736760 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:44.760996 1550381 cri.go:89] found id: ""
	I1218 01:49:44.761020 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.761029 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:44.761035 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:44.761102 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:44.786601 1550381 cri.go:89] found id: ""
	I1218 01:49:44.786637 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.786646 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:44.786655 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:44.786723 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:44.812292 1550381 cri.go:89] found id: ""
	I1218 01:49:44.812314 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.812322 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:44.812329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:44.812415 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:44.838185 1550381 cri.go:89] found id: ""
	I1218 01:49:44.838219 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.838229 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:44.838236 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:44.838298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:44.867060 1550381 cri.go:89] found id: ""
	I1218 01:49:44.867081 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.867089 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:44.867095 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:44.867151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:44.892070 1550381 cri.go:89] found id: ""
	I1218 01:49:44.892099 1550381 logs.go:282] 0 containers: []
	W1218 01:49:44.892108 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:44.892117 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:44.892133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:44.906549 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:44.906575 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:44.971842 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:44.963108    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.963929    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.965509    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.966125    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.967743    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:44.963108    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.963929    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.965509    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.966125    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:44.967743    4594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:44.971863 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:44.971877 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:44.997318 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:44.997352 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:45.078604 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:45.078658 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:47.669132 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:47.684661 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:47.684728 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:47.724476 1550381 cri.go:89] found id: ""
	I1218 01:49:47.724498 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.724509 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:47.724515 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:47.724576 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:47.758012 1550381 cri.go:89] found id: ""
	I1218 01:49:47.758036 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.758044 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:47.758051 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:47.758109 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:47.786154 1550381 cri.go:89] found id: ""
	I1218 01:49:47.786180 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.786189 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:47.786196 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:47.786258 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:47.810902 1550381 cri.go:89] found id: ""
	I1218 01:49:47.810928 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.810937 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:47.810944 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:47.811003 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:47.836006 1550381 cri.go:89] found id: ""
	I1218 01:49:47.836032 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.836040 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:47.836049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:47.836119 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:47.861054 1550381 cri.go:89] found id: ""
	I1218 01:49:47.861078 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.861087 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:47.861094 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:47.861167 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:47.889731 1550381 cri.go:89] found id: ""
	I1218 01:49:47.889756 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.889765 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:47.889772 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:47.889829 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:47.918028 1550381 cri.go:89] found id: ""
	I1218 01:49:47.918055 1550381 logs.go:282] 0 containers: []
	W1218 01:49:47.918064 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:47.918073 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:47.918090 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:47.972822 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:47.972860 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:47.987701 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:47.987730 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:48.055884 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:48.046737    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.047612    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.048505    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050346    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050922    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:48.046737    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.047612    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.048505    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050346    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:48.050922    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:48.055906 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:48.055919 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:48.081983 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:48.082021 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:50.614399 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:50.625532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:50.625607 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:50.669636 1550381 cri.go:89] found id: ""
	I1218 01:49:50.669663 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.669672 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:50.669678 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:50.669737 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:50.731793 1550381 cri.go:89] found id: ""
	I1218 01:49:50.731820 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.731829 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:50.731835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:50.731903 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:50.758384 1550381 cri.go:89] found id: ""
	I1218 01:49:50.758407 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.758416 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:50.758422 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:50.758481 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:50.783123 1550381 cri.go:89] found id: ""
	I1218 01:49:50.783148 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.783157 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:50.783163 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:50.783224 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:50.807986 1550381 cri.go:89] found id: ""
	I1218 01:49:50.808010 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.808019 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:50.808026 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:50.808084 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:50.833014 1550381 cri.go:89] found id: ""
	I1218 01:49:50.833037 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.833058 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:50.833066 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:50.833125 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:50.857525 1550381 cri.go:89] found id: ""
	I1218 01:49:50.857551 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.857560 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:50.857567 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:50.857631 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:50.882511 1550381 cri.go:89] found id: ""
	I1218 01:49:50.882535 1550381 logs.go:282] 0 containers: []
	W1218 01:49:50.882543 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:50.882552 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:50.882565 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:50.916936 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:50.916963 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:50.972064 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:50.972098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:50.987003 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:50.987031 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:51.056796 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:51.048453    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.048999    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.050667    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.051216    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.052850    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:51.048453    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.048999    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.050667    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.051216    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:51.052850    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:51.056817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:51.056829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:53.582769 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:53.594237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:53.594316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:53.619778 1550381 cri.go:89] found id: ""
	I1218 01:49:53.619800 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.619809 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:53.619815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:53.619877 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:53.677064 1550381 cri.go:89] found id: ""
	I1218 01:49:53.677087 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.677097 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:53.677103 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:53.677179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:53.733772 1550381 cri.go:89] found id: ""
	I1218 01:49:53.733798 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.733808 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:53.733815 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:53.733876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:53.759569 1550381 cri.go:89] found id: ""
	I1218 01:49:53.759594 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.759603 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:53.759609 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:53.759667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:53.785969 1550381 cri.go:89] found id: ""
	I1218 01:49:53.785993 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.786002 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:53.786008 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:53.786072 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:53.810819 1550381 cri.go:89] found id: ""
	I1218 01:49:53.810843 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.810851 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:53.810858 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:53.810923 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:53.836207 1550381 cri.go:89] found id: ""
	I1218 01:49:53.836271 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.836295 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:53.836314 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:53.836395 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:53.860468 1550381 cri.go:89] found id: ""
	I1218 01:49:53.860499 1550381 logs.go:282] 0 containers: []
	W1218 01:49:53.860508 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:53.860518 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:53.860537 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:53.917328 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:53.917365 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:53.932367 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:53.932407 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:54.001703 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:53.991890    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.992813    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994440    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994861    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.996408    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:53.991890    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.992813    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994440    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.994861    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:53.996408    4937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:54.001723 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:54.001737 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:54.030548 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:54.030584 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:56.561340 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:56.571927 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:56.571998 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:56.595966 1550381 cri.go:89] found id: ""
	I1218 01:49:56.595996 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.596006 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:56.596012 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:56.596073 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:56.620113 1550381 cri.go:89] found id: ""
	I1218 01:49:56.620136 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.620145 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:56.620151 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:56.620211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:56.655375 1550381 cri.go:89] found id: ""
	I1218 01:49:56.655401 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.655410 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:56.655417 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:56.655477 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:56.711903 1550381 cri.go:89] found id: ""
	I1218 01:49:56.711931 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.711940 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:56.711946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:56.712007 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:56.748501 1550381 cri.go:89] found id: ""
	I1218 01:49:56.748527 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.748536 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:56.748542 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:56.748600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:56.774097 1550381 cri.go:89] found id: ""
	I1218 01:49:56.774121 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.774130 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:56.774137 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:56.774196 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:56.802594 1550381 cri.go:89] found id: ""
	I1218 01:49:56.802618 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.802627 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:56.802633 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:56.802690 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:56.827592 1550381 cri.go:89] found id: ""
	I1218 01:49:56.827615 1550381 logs.go:282] 0 containers: []
	W1218 01:49:56.827623 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:56.827633 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:56.827645 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:56.852403 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:56.852433 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:49:56.880076 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:56.880109 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:56.935675 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:56.935712 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:56.950522 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:56.950549 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:57.019412 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:57.010556    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.011245    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013030    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013710    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.014875    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:57.010556    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.011245    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013030    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.013710    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:57.014875    5059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:59.521100 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:49:59.531832 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:49:59.531908 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:49:59.557309 1550381 cri.go:89] found id: ""
	I1218 01:49:59.557333 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.557342 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:49:59.557349 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:49:59.557406 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:49:59.581813 1550381 cri.go:89] found id: ""
	I1218 01:49:59.581889 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.581911 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:49:59.581919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:49:59.581978 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:49:59.605979 1550381 cri.go:89] found id: ""
	I1218 01:49:59.606003 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.606012 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:49:59.606018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:49:59.606101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:49:59.631076 1550381 cri.go:89] found id: ""
	I1218 01:49:59.631101 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.631110 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:49:59.631117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:49:59.631210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:49:59.670164 1550381 cri.go:89] found id: ""
	I1218 01:49:59.670189 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.670198 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:49:59.670205 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:49:59.670309 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:49:59.706830 1550381 cri.go:89] found id: ""
	I1218 01:49:59.706856 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.706865 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:49:59.706872 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:49:59.706953 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:49:59.739787 1550381 cri.go:89] found id: ""
	I1218 01:49:59.739815 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.739824 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:49:59.739830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:49:59.739892 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:49:59.766523 1550381 cri.go:89] found id: ""
	I1218 01:49:59.766548 1550381 logs.go:282] 0 containers: []
	W1218 01:49:59.766558 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:49:59.766568 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:49:59.766579 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:49:59.822153 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:49:59.822193 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:49:59.837991 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:49:59.838016 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:49:59.905967 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:49:59.897227    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.897769    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899360    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899879    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.901423    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:49:59.897227    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.897769    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899360    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.899879    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:49:59.901423    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:49:59.905990 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:49:59.906003 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:49:59.931368 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:49:59.931401 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:02.467452 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:02.478157 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:02.478230 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:02.504286 1550381 cri.go:89] found id: ""
	I1218 01:50:02.504311 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.504321 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:02.504328 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:02.504390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:02.530207 1550381 cri.go:89] found id: ""
	I1218 01:50:02.530232 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.530242 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:02.530249 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:02.530308 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:02.561278 1550381 cri.go:89] found id: ""
	I1218 01:50:02.561305 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.561314 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:02.561320 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:02.561383 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:02.586119 1550381 cri.go:89] found id: ""
	I1218 01:50:02.586144 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.586153 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:02.586159 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:02.586218 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:02.611212 1550381 cri.go:89] found id: ""
	I1218 01:50:02.611239 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.611249 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:02.611256 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:02.611317 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:02.638670 1550381 cri.go:89] found id: ""
	I1218 01:50:02.638697 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.638705 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:02.638715 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:02.638819 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:02.699868 1550381 cri.go:89] found id: ""
	I1218 01:50:02.699897 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.699906 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:02.699913 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:02.699971 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:02.753340 1550381 cri.go:89] found id: ""
	I1218 01:50:02.753371 1550381 logs.go:282] 0 containers: []
	W1218 01:50:02.753381 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:02.753391 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:02.753402 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:02.809735 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:02.809769 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:02.825241 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:02.825271 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:02.894096 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:02.885135    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.885712    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887483    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887956    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.889549    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:02.885135    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.885712    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887483    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.887956    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:02.889549    5272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:02.894118 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:02.894130 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:02.919985 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:02.920021 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:05.450883 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:05.461914 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:05.461989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:05.487197 1550381 cri.go:89] found id: ""
	I1218 01:50:05.487221 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.487230 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:05.487237 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:05.487297 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:05.513273 1550381 cri.go:89] found id: ""
	I1218 01:50:05.513304 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.513313 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:05.513321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:05.513385 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:05.544168 1550381 cri.go:89] found id: ""
	I1218 01:50:05.544191 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.544200 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:05.544206 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:05.544306 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:05.570574 1550381 cri.go:89] found id: ""
	I1218 01:50:05.570597 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.570607 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:05.570613 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:05.570675 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:05.598812 1550381 cri.go:89] found id: ""
	I1218 01:50:05.598837 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.598845 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:05.598852 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:05.598915 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:05.628314 1550381 cri.go:89] found id: ""
	I1218 01:50:05.628339 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.628348 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:05.628354 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:05.628418 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:05.665714 1550381 cri.go:89] found id: ""
	I1218 01:50:05.665742 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.665751 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:05.665757 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:05.665817 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:05.733576 1550381 cri.go:89] found id: ""
	I1218 01:50:05.733603 1550381 logs.go:282] 0 containers: []
	W1218 01:50:05.733624 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:05.733634 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:05.733652 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:05.795404 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:05.795439 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:05.811319 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:05.811347 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:05.878494 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:05.869308    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.869863    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.871616    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.872034    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.873656    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:05.869308    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.869863    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.871616    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.872034    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:05.873656    5383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:05.878517 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:05.878532 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:05.904153 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:05.904185 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:08.433275 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:08.443880 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:08.443983 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:08.468382 1550381 cri.go:89] found id: ""
	I1218 01:50:08.468408 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.468417 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:08.468424 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:08.468483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:08.498576 1550381 cri.go:89] found id: ""
	I1218 01:50:08.498629 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.498656 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:08.498662 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:08.498764 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:08.524767 1550381 cri.go:89] found id: ""
	I1218 01:50:08.524790 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.524799 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:08.524806 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:08.524868 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:08.551353 1550381 cri.go:89] found id: ""
	I1218 01:50:08.551380 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.551399 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:08.551406 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:08.551482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:08.577687 1550381 cri.go:89] found id: ""
	I1218 01:50:08.577713 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.577722 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:08.577729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:08.577816 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:08.603410 1550381 cri.go:89] found id: ""
	I1218 01:50:08.603434 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.603443 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:08.603450 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:08.603530 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:08.630799 1550381 cri.go:89] found id: ""
	I1218 01:50:08.630824 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.630833 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:08.630840 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:08.630903 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:08.705200 1550381 cri.go:89] found id: ""
	I1218 01:50:08.705228 1550381 logs.go:282] 0 containers: []
	W1218 01:50:08.705237 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:08.705247 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:08.705260 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:08.733020 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:08.733047 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:08.798171 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:08.789301    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.790118    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.791840    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.792498    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.794100    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:08.789301    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.790118    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.791840    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.792498    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:08.794100    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:08.798195 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:08.798217 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:08.823651 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:08.823682 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:08.851693 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:08.851720 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
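The cycle above checks each control-plane component by shelling out to crictl, exactly as the ssh_runner lines record. A minimal stand-alone Go sketch of that listing step follows; it is an illustration of the pattern, not minikube's implementation, and it assumes sudo and crictl are available on the host:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken from the cri.go lines in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the log's: No container was found matching "<component>"
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %v\n", name, ids)
	}
}

On the node captured in this log, every component prints the empty-result branch, which is why each probe ends in found id: "".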
	I1218 01:50:11.407503 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:11.418083 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:11.418157 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:11.443131 1550381 cri.go:89] found id: ""
	I1218 01:50:11.443153 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.443161 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:11.443167 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:11.443225 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:11.468456 1550381 cri.go:89] found id: ""
	I1218 01:50:11.468480 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.468489 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:11.468495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:11.468559 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:11.494875 1550381 cri.go:89] found id: ""
	I1218 01:50:11.494900 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.494910 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:11.494916 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:11.494976 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:11.522672 1550381 cri.go:89] found id: ""
	I1218 01:50:11.522695 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.522703 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:11.522710 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:11.522774 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:11.550689 1550381 cri.go:89] found id: ""
	I1218 01:50:11.550713 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.550723 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:11.550729 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:11.550789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:11.579573 1550381 cri.go:89] found id: ""
	I1218 01:50:11.579600 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.579608 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:11.579615 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:11.579677 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:11.605240 1550381 cri.go:89] found id: ""
	I1218 01:50:11.605265 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.605274 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:11.605281 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:11.605348 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:11.631171 1550381 cri.go:89] found id: ""
	I1218 01:50:11.631198 1550381 logs.go:282] 0 containers: []
	W1218 01:50:11.631208 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:11.631217 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:11.631228 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:11.709937 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:11.709969 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:11.779988 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:11.780023 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:11.795215 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:11.795243 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:11.862143 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:11.853713    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.854370    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856115    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856647    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.858165    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:11.853713    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.854370    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856115    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.856647    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:11.858165    5622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:11.862165 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:11.862177 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:14.389878 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:14.400681 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:14.400756 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:14.427103 1550381 cri.go:89] found id: ""
	I1218 01:50:14.427127 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.427136 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:14.427142 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:14.427200 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:14.455157 1550381 cri.go:89] found id: ""
	I1218 01:50:14.455180 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.455189 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:14.455195 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:14.455260 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:14.481712 1550381 cri.go:89] found id: ""
	I1218 01:50:14.481738 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.481752 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:14.481759 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:14.481821 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:14.506286 1550381 cri.go:89] found id: ""
	I1218 01:50:14.506312 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.506320 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:14.506327 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:14.506385 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:14.531764 1550381 cri.go:89] found id: ""
	I1218 01:50:14.531789 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.531797 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:14.531804 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:14.531864 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:14.556792 1550381 cri.go:89] found id: ""
	I1218 01:50:14.556817 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.556826 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:14.556832 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:14.556896 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:14.581496 1550381 cri.go:89] found id: ""
	I1218 01:50:14.581521 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.581531 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:14.581537 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:14.581603 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:14.605950 1550381 cri.go:89] found id: ""
	I1218 01:50:14.605973 1550381 logs.go:282] 0 containers: []
	W1218 01:50:14.605982 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:14.605992 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:14.606007 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:14.631804 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:14.631838 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:14.684967 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:14.685004 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:14.769991 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:14.770039 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:14.785356 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:14.785391 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:14.851585 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:14.843391    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.844021    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.845569    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.846097    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.847598    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:14.843391    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.844021    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.845569    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.846097    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:14.847598    5737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
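Every describe-nodes attempt in these blocks fails the same way: kubectl cannot reach localhost:8443 because no kube-apiserver is listening. A hypothetical one-file probe of just that symptom, using the same TCP dial kubectl's client ultimately makes:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the kubeconfig in the log points at.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver container running, this prints the same
		// "connect: connection refused" seen in the stderr blocks above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}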
	I1218 01:50:17.353376 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:17.364408 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:17.364479 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:17.389035 1550381 cri.go:89] found id: ""
	I1218 01:50:17.389062 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.389071 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:17.389077 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:17.389141 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:17.418594 1550381 cri.go:89] found id: ""
	I1218 01:50:17.418620 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.418628 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:17.418634 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:17.418693 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:17.444908 1550381 cri.go:89] found id: ""
	I1218 01:50:17.444930 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.444938 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:17.444945 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:17.445006 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:17.470076 1550381 cri.go:89] found id: ""
	I1218 01:50:17.470100 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.470109 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:17.470117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:17.470178 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:17.494949 1550381 cri.go:89] found id: ""
	I1218 01:50:17.494972 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.494984 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:17.494992 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:17.495050 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:17.523740 1550381 cri.go:89] found id: ""
	I1218 01:50:17.523767 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.523775 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:17.523782 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:17.523840 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:17.551184 1550381 cri.go:89] found id: ""
	I1218 01:50:17.551212 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.551220 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:17.551227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:17.551290 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:17.576421 1550381 cri.go:89] found id: ""
	I1218 01:50:17.576446 1550381 logs.go:282] 0 containers: []
	W1218 01:50:17.576454 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:17.576464 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:17.576476 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:17.640879 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:17.631690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.632375    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634161    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.636185    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:17.631690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.632375    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634161    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.634690    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:17.636185    5825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:17.640898 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:17.640911 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:17.719096 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:17.719184 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:17.749240 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:17.749266 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:17.804542 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:17.804581 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
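Each "Gathering logs for ..." step runs one fixed shell pipeline through /bin/bash -c. A hypothetical local equivalent of that fan-out is sketched below; the command strings are copied verbatim from the log lines, while the little runner around them is assumed and is not minikube's code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Diagnostic pipelines as they appear in the ssh_runner.go lines above.
	probes := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, p := range probes {
		// Each pipeline is handed to bash whole, so the backticks and || fallbacks work.
		out, err := exec.Command("/bin/bash", "-c", p.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", p.name, err, out)
	}
}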
	I1218 01:50:20.319731 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:20.329891 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:20.329962 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:20.353449 1550381 cri.go:89] found id: ""
	I1218 01:50:20.353471 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.353479 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:20.353485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:20.353542 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:20.378067 1550381 cri.go:89] found id: ""
	I1218 01:50:20.378089 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.378098 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:20.378104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:20.378162 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:20.403262 1550381 cri.go:89] found id: ""
	I1218 01:50:20.403288 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.403297 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:20.403304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:20.403362 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:20.430817 1550381 cri.go:89] found id: ""
	I1218 01:50:20.430842 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.430851 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:20.430858 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:20.430916 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:20.456026 1550381 cri.go:89] found id: ""
	I1218 01:50:20.456049 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.456057 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:20.456064 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:20.456123 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:20.485362 1550381 cri.go:89] found id: ""
	I1218 01:50:20.485388 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.485397 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:20.485404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:20.485461 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:20.509757 1550381 cri.go:89] found id: ""
	I1218 01:50:20.509779 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.509788 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:20.509794 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:20.509851 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:20.540098 1550381 cri.go:89] found id: ""
	I1218 01:50:20.540122 1550381 logs.go:282] 0 containers: []
	W1218 01:50:20.540130 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:20.540139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:20.540151 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:20.597234 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:20.597269 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:20.611800 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:20.611826 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:20.741195 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:20.732954    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.733490    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735078    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735547    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.737340    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:20.732954    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.733490    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735078    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.735547    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:20.737340    5941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:20.741222 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:20.741235 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:20.766650 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:20.766689 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:23.295459 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:23.306363 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:23.306450 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:23.331822 1550381 cri.go:89] found id: ""
	I1218 01:50:23.331848 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.331857 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:23.331864 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:23.331925 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:23.357194 1550381 cri.go:89] found id: ""
	I1218 01:50:23.357219 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.357228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:23.357234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:23.357293 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:23.383201 1550381 cri.go:89] found id: ""
	I1218 01:50:23.383228 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.383238 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:23.383245 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:23.383306 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:23.409593 1550381 cri.go:89] found id: ""
	I1218 01:50:23.409619 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.409628 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:23.409636 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:23.409694 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:23.434134 1550381 cri.go:89] found id: ""
	I1218 01:50:23.434157 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.434167 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:23.434173 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:23.434231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:23.458615 1550381 cri.go:89] found id: ""
	I1218 01:50:23.458637 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.458645 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:23.458652 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:23.458714 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:23.483411 1550381 cri.go:89] found id: ""
	I1218 01:50:23.483433 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.483441 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:23.483447 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:23.483505 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:23.510673 1550381 cri.go:89] found id: ""
	I1218 01:50:23.510697 1550381 logs.go:282] 0 containers: []
	W1218 01:50:23.510707 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:23.510716 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:23.510727 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:23.569129 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:23.569169 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:23.583622 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:23.583654 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:23.660608 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:23.639546    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.640211    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.646944    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.648070    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.649869    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:23.639546    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.640211    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.646944    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.648070    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:23.649869    6054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:23.660646 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:23.660659 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:23.689685 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:23.689724 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:26.245910 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:26.256314 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:26.256387 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:26.281224 1550381 cri.go:89] found id: ""
	I1218 01:50:26.281247 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.281257 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:26.281263 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:26.281331 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:26.310540 1550381 cri.go:89] found id: ""
	I1218 01:50:26.310567 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.310576 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:26.310583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:26.310642 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:26.336372 1550381 cri.go:89] found id: ""
	I1218 01:50:26.336399 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.336407 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:26.336413 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:26.336473 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:26.362095 1550381 cri.go:89] found id: ""
	I1218 01:50:26.362120 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.362129 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:26.362135 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:26.362199 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:26.387399 1550381 cri.go:89] found id: ""
	I1218 01:50:26.387424 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.387433 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:26.387439 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:26.387502 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:26.412769 1550381 cri.go:89] found id: ""
	I1218 01:50:26.412794 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.412803 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:26.412809 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:26.412878 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:26.437098 1550381 cri.go:89] found id: ""
	I1218 01:50:26.437124 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.437132 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:26.437139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:26.437223 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:26.462717 1550381 cri.go:89] found id: ""
	I1218 01:50:26.462744 1550381 logs.go:282] 0 containers: []
	W1218 01:50:26.462754 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:26.462764 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:26.462782 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:26.521734 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:26.521768 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:26.536748 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:26.536777 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:26.603709 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:26.594893    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.595492    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597257    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597791    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.599350    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:26.594893    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.595492    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597257    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.597791    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:26.599350    6170 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:26.603730 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:26.603749 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:26.632522 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:26.632599 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:29.191094 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:29.202310 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:29.202386 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:29.227851 1550381 cri.go:89] found id: ""
	I1218 01:50:29.227878 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.227887 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:29.227893 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:29.227960 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:29.257631 1550381 cri.go:89] found id: ""
	I1218 01:50:29.257656 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.257665 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:29.257671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:29.257740 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:29.283590 1550381 cri.go:89] found id: ""
	I1218 01:50:29.283615 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.283625 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:29.283631 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:29.283696 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:29.311410 1550381 cri.go:89] found id: ""
	I1218 01:50:29.311436 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.311445 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:29.311452 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:29.311517 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:29.342669 1550381 cri.go:89] found id: ""
	I1218 01:50:29.342695 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.342714 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:29.342721 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:29.342815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:29.367296 1550381 cri.go:89] found id: ""
	I1218 01:50:29.367321 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.367330 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:29.367336 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:29.367396 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:29.392236 1550381 cri.go:89] found id: ""
	I1218 01:50:29.392260 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.392269 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:29.392275 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:29.392336 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:29.417512 1550381 cri.go:89] found id: ""
	I1218 01:50:29.417538 1550381 logs.go:282] 0 containers: []
	W1218 01:50:29.417547 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:29.417556 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:29.417594 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:29.488248 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:29.479597    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.480325    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482140    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482810    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.484373    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:29.479597    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.480325    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482140    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.482810    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:29.484373    6277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:29.488272 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:29.488289 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:29.513850 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:29.513884 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:29.543041 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:29.543071 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:29.602048 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:29.602087 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
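Every pass above ends the same way: kubectl cannot reach the apiserver on localhost:8443. Before reading further cycles it can help to confirm the port is genuinely closed rather than slow. A minimal sketch, assuming curl (or nc) is available inside the node; /livez is the standard apiserver health endpoint and is an assumption here, since these runs never get far enough to hit it:

	# Probe the port kubectl is failing against; "connection refused"
	# matches the memcache.go errors in the cycle above.
	curl -sk --max-time 5 https://localhost:8443/livez \
	  || echo "apiserver not reachable on :8443"
	# Raw TCP fallback if curl is missing:
	nc -zv localhost 8443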
	I1218 01:50:32.117433 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:32.128498 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:32.128589 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:32.153547 1550381 cri.go:89] found id: ""
	I1218 01:50:32.153571 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.153580 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:32.153587 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:32.153647 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:32.178431 1550381 cri.go:89] found id: ""
	I1218 01:50:32.178455 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.178464 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:32.178471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:32.178529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:32.203336 1550381 cri.go:89] found id: ""
	I1218 01:50:32.203362 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.203371 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:32.203377 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:32.203434 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:32.230677 1550381 cri.go:89] found id: ""
	I1218 01:50:32.230702 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.230712 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:32.230718 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:32.230800 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:32.255544 1550381 cri.go:89] found id: ""
	I1218 01:50:32.255567 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.255576 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:32.255583 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:32.255661 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:32.282405 1550381 cri.go:89] found id: ""
	I1218 01:50:32.282468 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.282486 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:32.282493 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:32.282551 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:32.311100 1550381 cri.go:89] found id: ""
	I1218 01:50:32.311124 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.311133 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:32.311139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:32.311195 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:32.339521 1550381 cri.go:89] found id: ""
	I1218 01:50:32.339550 1550381 logs.go:282] 0 containers: []
	W1218 01:50:32.339559 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:50:32.339568 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:32.339579 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:32.364381 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:32.364417 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:50:32.396991 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:32.397017 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:32.453109 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:32.453144 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:50:32.468129 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:32.468158 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:32.534370 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:32.525120    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.525758    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.527579    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.528149    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.529876    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:32.525120    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.525758    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.527579    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.528149    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:32.529876    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
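The block above is one complete pass of the gathering loop: pgrep for a kube-apiserver process, then one crictl query per expected control-plane container, then the journal, dmesg, describe-nodes and container-status collectors. The same pass can be reproduced by hand; this is a minimal sketch built only from the commands that appear verbatim in the log lines:

	# One manual pass of the per-component checks minikube repeats every ~3s.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container found matching \"$name\""
	done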
	[... the same log-gathering cycle repeats at ~3s intervals from 01:50:35 through 01:50:50 (kubectl pids 6517, 6632, 6748, 6861, 6979, 7093): every crictl query for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard returns no containers, and each "kubectl describe nodes" attempt fails with "connection refused" on localhost:8443 ...]
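With the control plane never coming up, the useful signal is usually in the kubelet and containerd journals that each pass tails. A minimal sketch for pulling them directly over minikube ssh; the profile name below is an assumption (it is not recoverable from the pid alone), and the -n 400 tail length mirrors the loop's own commands:

	PROFILE=functional-232602   # assumption: substitute the profile under test
	minikube ssh -p "$PROFILE" -- sudo journalctl -u kubelet -n 400 --no-pager
	minikube ssh -p "$PROFILE" -- sudo journalctl -u containerd -n 400 --no-pager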
	I1218 01:50:52.801073 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:50:52.811866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:50:52.811938 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:50:52.841370 1550381 cri.go:89] found id: ""
	I1218 01:50:52.841396 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.841404 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:50:52.841411 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:50:52.841477 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:50:52.866527 1550381 cri.go:89] found id: ""
	I1218 01:50:52.866549 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.866557 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:50:52.866564 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:50:52.866629 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:50:52.905295 1550381 cri.go:89] found id: ""
	I1218 01:50:52.905323 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.905333 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:50:52.905340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:50:52.905402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:50:52.976848 1550381 cri.go:89] found id: ""
	I1218 01:50:52.976871 1550381 logs.go:282] 0 containers: []
	W1218 01:50:52.976880 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:50:52.976886 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:50:52.976945 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:50:53.005921 1550381 cri.go:89] found id: ""
	I1218 01:50:53.005996 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.006013 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:50:53.006021 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:50:53.006096 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:50:53.035172 1550381 cri.go:89] found id: ""
	I1218 01:50:53.035209 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.035219 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:50:53.035226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:50:53.035295 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:50:53.062748 1550381 cri.go:89] found id: ""
	I1218 01:50:53.062816 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.062841 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:50:53.062856 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:50:53.062933 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:50:53.088160 1550381 cri.go:89] found id: ""
	I1218 01:50:53.088194 1550381 logs.go:282] 0 containers: []
	W1218 01:50:53.088203 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
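	Each lookup above is a single `crictl ps` call with a name filter, and an empty result is what produces the paired `found id: ""` / `No container was found` lines. The whole sweep can be reproduced by hand (a sketch; component names copied from the log):
	
	    # count containers per control-plane component; all zero in this run
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	        printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	    done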
	I1218 01:50:53.088215 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:50:53.088227 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:50:53.143868 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:50:53.143906 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
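	The dmesg flags keep this dump compact: -P disables the pager, -H prints human-readable timestamps, -L=never turns color off, and --level restricts output to warnings and worse. The equivalent long-option form (a sketch):
	
	    # same kernel-log query with long options
	    sudo dmesg --nopager --human --color=never \
	         --level=warn,err,crit,alert,emerg | tail -n 400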
	I1218 01:50:53.159169 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:50:53.159240 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:50:53.226415 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:50:53.217507    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.218119    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220118    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220684    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.222463    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:50:53.217507    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.218119    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220118    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.220684    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:50:53.222463    7207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:50:53.226438 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:50:53.226451 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:50:53.251410 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:50:53.251448 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
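	From this point the collector simply loops: roughly every three seconds it re-runs the pgrep check, the eight crictl lookups, and the five log gatherers, and gets the same empty answers each time. In this state the kubelet journal collected on each pass is the most telling artifact, since the kubelet is what should be creating the static control-plane pods; one way to surface the relevant lines (a sketch):
	
	    # pull the recent kubelet journal and keep only error-ish entries
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20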
	[seven further identical diagnostic cycles elided (01:50:55, 01:50:58, 01:51:01, 01:51:04, 01:51:07, 01:51:10, 01:51:13): each pass finds 0 containers for all eight components, and each "describe nodes" attempt again fails with "connection refused" on localhost:8443; the final captured pass begins at 01:51:16 below]
	I1218 01:51:16.572096 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:16.582596 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:16.582666 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:16.606933 1550381 cri.go:89] found id: ""
	I1218 01:51:16.606963 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.606972 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:16.606979 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:16.607038 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:16.631960 1550381 cri.go:89] found id: ""
	I1218 01:51:16.631989 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.632004 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:16.632010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:16.632071 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:16.659171 1550381 cri.go:89] found id: ""
	I1218 01:51:16.659198 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.659207 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:16.659213 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:16.659269 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:16.689389 1550381 cri.go:89] found id: ""
	I1218 01:51:16.689414 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.689422 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:16.689429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:16.689494 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:16.714209 1550381 cri.go:89] found id: ""
	I1218 01:51:16.714236 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.714246 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:16.714252 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:16.714311 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:16.739422 1550381 cri.go:89] found id: ""
	I1218 01:51:16.739450 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.739461 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:16.739467 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:16.739529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:16.765164 1550381 cri.go:89] found id: ""
	I1218 01:51:16.765231 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.765256 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:16.765283 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:16.765372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:16.790914 1550381 cri.go:89] found id: ""
	I1218 01:51:16.790990 1550381 logs.go:282] 0 containers: []
	W1218 01:51:16.791014 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:16.791035 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:16.791063 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:16.848408 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:16.848446 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:16.864121 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:16.864199 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:16.967366 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:16.949726    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.950835    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.952284    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.953468    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.954281    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:16.949726    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.950835    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.952284    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.953468    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:16.954281    8107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:16.967436 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:16.967463 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:17.008108 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:17.008145 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:19.540127 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:19.550917 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:19.550989 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:19.574864 1550381 cri.go:89] found id: ""
	I1218 01:51:19.574939 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.574964 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:19.574978 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:19.575059 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:19.605362 1550381 cri.go:89] found id: ""
	I1218 01:51:19.605386 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.605395 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:19.605401 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:19.605465 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:19.631747 1550381 cri.go:89] found id: ""
	I1218 01:51:19.631774 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.631789 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:19.631795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:19.631870 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:19.656716 1550381 cri.go:89] found id: ""
	I1218 01:51:19.656740 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.656749 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:19.656755 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:19.656813 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:19.689179 1550381 cri.go:89] found id: ""
	I1218 01:51:19.689206 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.689215 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:19.689221 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:19.689292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:19.713751 1550381 cri.go:89] found id: ""
	I1218 01:51:19.713774 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.713783 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:19.713789 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:19.713846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:19.737993 1550381 cri.go:89] found id: ""
	I1218 01:51:19.738063 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.738074 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:19.738081 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:19.738150 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:19.763540 1550381 cri.go:89] found id: ""
	I1218 01:51:19.763565 1550381 logs.go:282] 0 containers: []
	W1218 01:51:19.763574 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:19.763583 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:19.763618 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:19.818946 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:19.818982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:19.834461 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:19.834487 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:19.932671 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:19.901289    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902181    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902894    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.904924    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.905669    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:19.901289    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902181    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.902894    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.904924    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:19.905669    8226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:19.932695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:19.932708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:19.986050 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:19.986085 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:22.530737 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:22.542075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:22.542151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:22.567921 1550381 cri.go:89] found id: ""
	I1218 01:51:22.567945 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.567953 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:22.567960 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:22.568020 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:22.595894 1550381 cri.go:89] found id: ""
	I1218 01:51:22.595919 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.595928 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:22.595933 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:22.595991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:22.620929 1550381 cri.go:89] found id: ""
	I1218 01:51:22.620953 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.620968 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:22.620974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:22.621040 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:22.646170 1550381 cri.go:89] found id: ""
	I1218 01:51:22.646195 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.646203 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:22.646210 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:22.646270 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:22.675272 1550381 cri.go:89] found id: ""
	I1218 01:51:22.675296 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.675305 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:22.675312 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:22.675376 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:22.702994 1550381 cri.go:89] found id: ""
	I1218 01:51:22.703023 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.703033 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:22.703039 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:22.703106 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:22.728507 1550381 cri.go:89] found id: ""
	I1218 01:51:22.728533 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.728542 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:22.728548 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:22.728608 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:22.754134 1550381 cri.go:89] found id: ""
	I1218 01:51:22.754157 1550381 logs.go:282] 0 containers: []
	W1218 01:51:22.754165 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:22.754175 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:22.754187 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:22.810488 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:22.810539 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:22.826174 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:22.826212 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:22.906393 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:22.881838    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.882696    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884418    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884947    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.886679    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:22.881838    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.882696    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884418    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.884947    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:22.886679    8340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:22.906431 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:22.906448 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:22.948969 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:22.949025 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:25.504885 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:25.515607 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:25.515676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:25.539969 1550381 cri.go:89] found id: ""
	I1218 01:51:25.539994 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.540003 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:25.540010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:25.540076 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:25.565160 1550381 cri.go:89] found id: ""
	I1218 01:51:25.565189 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.565198 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:25.565204 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:25.565262 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:25.593521 1550381 cri.go:89] found id: ""
	I1218 01:51:25.593545 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.593554 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:25.593560 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:25.593625 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:25.618492 1550381 cri.go:89] found id: ""
	I1218 01:51:25.618523 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.618532 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:25.618538 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:25.618600 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:25.642784 1550381 cri.go:89] found id: ""
	I1218 01:51:25.642810 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.642819 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:25.642825 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:25.642885 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:25.667732 1550381 cri.go:89] found id: ""
	I1218 01:51:25.667759 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.667768 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:25.667778 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:25.667843 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:25.695444 1550381 cri.go:89] found id: ""
	I1218 01:51:25.695468 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.695477 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:25.695483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:25.695540 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:25.720467 1550381 cri.go:89] found id: ""
	I1218 01:51:25.720492 1550381 logs.go:282] 0 containers: []
	W1218 01:51:25.720501 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:25.720510 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:25.720522 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:25.777380 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:25.777416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:25.793106 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:25.793135 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:25.859796 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:25.850842    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.851589    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853179    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853705    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.855254    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:25.850842    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.851589    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853179    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.853705    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:25.855254    8454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:25.859817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:25.859829 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:25.885375 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:25.885414 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:28.480490 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:28.491517 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:28.491587 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:28.528988 1550381 cri.go:89] found id: ""
	I1218 01:51:28.529011 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.529020 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:28.529027 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:28.529088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:28.554389 1550381 cri.go:89] found id: ""
	I1218 01:51:28.554415 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.554423 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:28.554429 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:28.554491 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:28.595339 1550381 cri.go:89] found id: ""
	I1218 01:51:28.595365 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.595374 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:28.595380 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:28.595440 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:28.620349 1550381 cri.go:89] found id: ""
	I1218 01:51:28.620376 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.620384 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:28.620391 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:28.620451 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:28.644815 1550381 cri.go:89] found id: ""
	I1218 01:51:28.644844 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.644854 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:28.644862 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:28.644923 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:28.669719 1550381 cri.go:89] found id: ""
	I1218 01:51:28.669746 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.669755 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:28.669762 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:28.669822 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:28.694390 1550381 cri.go:89] found id: ""
	I1218 01:51:28.694415 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.694424 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:28.694430 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:28.694491 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:28.719213 1550381 cri.go:89] found id: ""
	I1218 01:51:28.719238 1550381 logs.go:282] 0 containers: []
	W1218 01:51:28.719247 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:28.719257 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:28.719268 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:28.777972 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:28.778010 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:28.792667 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:28.792698 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:28.863732 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:28.855982    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.856419    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.857906    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.858373    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.859886    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:28.855982    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.856419    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.857906    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.858373    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:28.859886    8570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:28.863755 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:28.863768 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:28.896538 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:28.896571 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:31.484234 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:31.494710 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:31.494781 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:31.519036 1550381 cri.go:89] found id: ""
	I1218 01:51:31.519061 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.519070 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:31.519077 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:31.519136 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:31.543677 1550381 cri.go:89] found id: ""
	I1218 01:51:31.543702 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.543710 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:31.543717 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:31.543778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:31.570267 1550381 cri.go:89] found id: ""
	I1218 01:51:31.570299 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.570308 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:31.570315 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:31.570406 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:31.597988 1550381 cri.go:89] found id: ""
	I1218 01:51:31.598024 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.598034 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:31.598040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:31.598102 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:31.625949 1550381 cri.go:89] found id: ""
	I1218 01:51:31.625983 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.625993 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:31.626014 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:31.626097 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:31.654833 1550381 cri.go:89] found id: ""
	I1218 01:51:31.654898 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.654923 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:31.654937 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:31.655011 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:31.686105 1550381 cri.go:89] found id: ""
	I1218 01:51:31.686132 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.686143 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:31.686149 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:31.686233 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:31.711106 1550381 cri.go:89] found id: ""
	I1218 01:51:31.711139 1550381 logs.go:282] 0 containers: []
	W1218 01:51:31.711148 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:31.711158 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:31.711187 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:31.725923 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:31.725952 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:31.789766 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:31.780492    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.781242    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.782882    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.783497    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.785156    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:31.780492    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.781242    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.782882    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.783497    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:31.785156    8682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:31.789789 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:31.789801 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:31.815524 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:31.815558 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:31.843690 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:31.843718 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:34.403611 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:34.414490 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:34.414564 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:34.438520 1550381 cri.go:89] found id: ""
	I1218 01:51:34.438544 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.438552 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:34.438562 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:34.438625 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:34.462603 1550381 cri.go:89] found id: ""
	I1218 01:51:34.462627 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.462636 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:34.462642 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:34.462699 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:34.490371 1550381 cri.go:89] found id: ""
	I1218 01:51:34.490395 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.490404 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:34.490410 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:34.490471 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:34.513456 1550381 cri.go:89] found id: ""
	I1218 01:51:34.513480 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.513488 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:34.513495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:34.513562 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:34.537361 1550381 cri.go:89] found id: ""
	I1218 01:51:34.537385 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.537394 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:34.537407 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:34.537468 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:34.561230 1550381 cri.go:89] found id: ""
	I1218 01:51:34.561253 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.561261 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:34.561268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:34.561348 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:34.585180 1550381 cri.go:89] found id: ""
	I1218 01:51:34.585204 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.585212 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:34.585219 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:34.585280 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:34.609741 1550381 cri.go:89] found id: ""
	I1218 01:51:34.609766 1550381 logs.go:282] 0 containers: []
	W1218 01:51:34.609775 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:34.609785 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:34.609802 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:34.667204 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:34.667238 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:34.682240 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:34.682269 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:34.745795 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:34.737582    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.737983    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739504    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739826    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.741316    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:51:34.737582    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.737983    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739504    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.739826    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:34.741316    8799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:51:34.745817 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:34.745831 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:34.771222 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:34.771256 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:37.302139 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:37.313213 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:37.313316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:37.348873 1550381 cri.go:89] found id: ""
	I1218 01:51:37.348895 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.348903 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:37.348909 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:37.348966 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:37.374229 1550381 cri.go:89] found id: ""
	I1218 01:51:37.374256 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.374265 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:37.374271 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:37.374332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:37.398897 1550381 cri.go:89] found id: ""
	I1218 01:51:37.398920 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.398928 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:37.398935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:37.398991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:37.422904 1550381 cri.go:89] found id: ""
	I1218 01:51:37.422930 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.422939 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:37.422946 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:37.423010 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:37.451168 1550381 cri.go:89] found id: ""
	I1218 01:51:37.451196 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.451205 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:37.451211 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:37.451273 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:37.477986 1550381 cri.go:89] found id: ""
	I1218 01:51:37.478011 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.478021 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:37.478028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:37.478096 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:37.504463 1550381 cri.go:89] found id: ""
	I1218 01:51:37.504487 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.504497 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:37.504503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:37.504563 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:37.529381 1550381 cri.go:89] found id: ""
	I1218 01:51:37.529405 1550381 logs.go:282] 0 containers: []
	W1218 01:51:37.529414 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:37.529423 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:37.529435 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:37.598285 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:37.584791    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.590506    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.591762    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.592298    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:37.593854    8907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:37.598307 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:37.598319 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:37.623017 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:37.623052 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:37.654645 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:37.654674 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:37.711304 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:37.711339 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
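
The block above is one complete iteration of minikube's wait-for-apiserver loop: probe for a kube-apiserver process with pgrep, ask the CRI for each control-plane container by name, and, when every query comes back empty, gather describe-nodes, containerd, container-status, kubelet, and dmesg logs before retrying (the next iteration begins roughly three seconds later, at 01:51:40). A minimal Go sketch of that shape, assuming the command strings from the log and using hypothetical helper names (apiserverProcessRunning, containerIDs) rather than minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// The containers the loop queries via `sudo crictl ps -a --quiet --name=<name>`.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// apiserverProcessRunning is a hypothetical stand-in for the
// `sudo pgrep -xnf kube-apiserver.*minikube.*` probe in the log:
// pgrep exits 0 only when a matching process exists.
func apiserverProcessRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// containerIDs is a hypothetical stand-in for the crictl query; empty
// output corresponds to the `found id: ""` lines in the log.
func containerIDs(name string) string {
	out, _ := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	return string(out)
}

func main() {
	for {
		if apiserverProcessRunning() {
			fmt.Println("kube-apiserver is running")
			return
		}
		for _, name := range components {
			if containerIDs(name) == "" {
				fmt.Printf("no container was found matching %q\n", name)
			}
		}
		// The real loop gathers kubelet/dmesg/containerd logs here, then retries.
		time.Sleep(3 * time.Second)
	}
}
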
	I1218 01:51:40.226741 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:40.238408 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:40.238480 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:40.263769 1550381 cri.go:89] found id: ""
	I1218 01:51:40.263795 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.263804 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:40.263810 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:40.263896 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:40.289194 1550381 cri.go:89] found id: ""
	I1218 01:51:40.289220 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.289228 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:40.289234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:40.289292 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:40.314040 1550381 cri.go:89] found id: ""
	I1218 01:51:40.314064 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.314073 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:40.314079 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:40.314137 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:40.339145 1550381 cri.go:89] found id: ""
	I1218 01:51:40.339180 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.339189 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:40.339212 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:40.339293 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:40.364902 1550381 cri.go:89] found id: ""
	I1218 01:51:40.364931 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.364940 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:40.364947 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:40.365009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:40.389709 1550381 cri.go:89] found id: ""
	I1218 01:51:40.389730 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.389739 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:40.389745 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:40.389804 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:40.414858 1550381 cri.go:89] found id: ""
	I1218 01:51:40.414882 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.414891 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:40.414898 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:40.414958 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:40.441847 1550381 cri.go:89] found id: ""
	I1218 01:51:40.441875 1550381 logs.go:282] 0 containers: []
	W1218 01:51:40.441884 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:40.441893 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:40.441906 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:40.456791 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:40.456821 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:40.525853 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:40.518222    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.518768    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.520336    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.520859    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:40.521950    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:40.525876 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:40.525889 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:40.550993 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:40.551028 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:40.581756 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:40.581786 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:43.139640 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:43.166426 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:43.166501 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:43.205967 1550381 cri.go:89] found id: ""
	I1218 01:51:43.206046 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.206071 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:43.206091 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:43.206223 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:43.234922 1550381 cri.go:89] found id: ""
	I1218 01:51:43.234950 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.234958 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:43.234964 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:43.235023 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:43.261353 1550381 cri.go:89] found id: ""
	I1218 01:51:43.261376 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.261385 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:43.261392 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:43.261482 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:43.286879 1550381 cri.go:89] found id: ""
	I1218 01:51:43.286906 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.286915 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:43.286922 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:43.286982 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:43.312530 1550381 cri.go:89] found id: ""
	I1218 01:51:43.312554 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.312568 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:43.312575 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:43.312667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:43.337185 1550381 cri.go:89] found id: ""
	I1218 01:51:43.337207 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.337217 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:43.337223 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:43.337280 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:43.361707 1550381 cri.go:89] found id: ""
	I1218 01:51:43.361731 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.361741 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:43.361747 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:43.361805 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:43.391450 1550381 cri.go:89] found id: ""
	I1218 01:51:43.391483 1550381 logs.go:282] 0 containers: []
	W1218 01:51:43.391492 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:43.391502 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:43.391513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:43.449067 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:43.449104 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:43.464299 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:43.464329 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:43.534945 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:43.525741    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.526498    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.528182    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.528863    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:43.530697    9141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:43.534968 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:43.534980 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:43.560324 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:43.560357 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:46.089618 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:46.100369 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:46.100466 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:46.125679 1550381 cri.go:89] found id: ""
	I1218 01:51:46.125705 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.125714 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:46.125722 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:46.125789 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:46.187262 1550381 cri.go:89] found id: ""
	I1218 01:51:46.187300 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.187310 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:46.187317 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:46.187376 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:46.244106 1550381 cri.go:89] found id: ""
	I1218 01:51:46.244130 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.244139 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:46.244145 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:46.244212 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:46.269674 1550381 cri.go:89] found id: ""
	I1218 01:51:46.269740 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.269769 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:46.269787 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:46.269876 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:46.299177 1550381 cri.go:89] found id: ""
	I1218 01:51:46.299199 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.299209 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:46.299215 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:46.299273 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:46.328469 1550381 cri.go:89] found id: ""
	I1218 01:51:46.328491 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.328499 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:46.328506 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:46.328564 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:46.354262 1550381 cri.go:89] found id: ""
	I1218 01:51:46.354288 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.354297 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:46.354304 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:46.354362 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:46.378724 1550381 cri.go:89] found id: ""
	I1218 01:51:46.378752 1550381 logs.go:282] 0 containers: []
	W1218 01:51:46.378761 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:46.378770 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:46.378781 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:46.433721 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:46.433759 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:46.448259 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:46.448295 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:46.511060 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:46.503056    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.503703    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.504880    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.505441    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:46.507108    9254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:46.511081 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:46.511093 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:46.536601 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:46.536803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
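
Every describe-nodes attempt in this run dies the same way: kubectl's discovery client retries five times, and each dial of localhost:8443 returns connect: connection refused. Refused, rather than a timeout, means nothing is bound to the port at all, which is consistent with crictl finding no kube-apiserver container, so the failure is upstream of kubectl. A quick standalone check that makes the same distinction, sketched in Go on the assumption that 8443 is the apiserver port from the log:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// 8443 is the port the failing kubectl calls dial in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("port open: something is listening on 8443")
	case errors.Is(err, syscall.ECONNREFUSED):
		// Matches the log: no process bound to the port,
		// i.e. the apiserver never started.
		fmt.Println("connection refused: nothing listening on 8443")
	default:
		fmt.Printf("other dial failure (e.g. timeout): %v\n", err)
	}
}
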
	I1218 01:51:49.070137 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:49.081049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:49.081123 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:49.106438 1550381 cri.go:89] found id: ""
	I1218 01:51:49.106465 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.106474 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:49.106483 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:49.106546 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:49.131233 1550381 cri.go:89] found id: ""
	I1218 01:51:49.131257 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.131265 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:49.131272 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:49.131337 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:49.194204 1550381 cri.go:89] found id: ""
	I1218 01:51:49.194233 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.194242 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:49.194248 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:49.194310 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:49.244013 1550381 cri.go:89] found id: ""
	I1218 01:51:49.244039 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.244048 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:49.244054 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:49.244120 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:49.271185 1550381 cri.go:89] found id: ""
	I1218 01:51:49.271211 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.271219 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:49.271226 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:49.271288 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:49.298143 1550381 cri.go:89] found id: ""
	I1218 01:51:49.298170 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.298180 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:49.298187 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:49.298251 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:49.324346 1550381 cri.go:89] found id: ""
	I1218 01:51:49.324374 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.324383 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:49.324389 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:49.324450 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:49.350033 1550381 cri.go:89] found id: ""
	I1218 01:51:49.350063 1550381 logs.go:282] 0 containers: []
	W1218 01:51:49.350072 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:49.350081 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:49.350094 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:49.382558 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:49.382589 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:49.438756 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:49.438795 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:49.453736 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:49.453765 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:49.515649 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:49.506698    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.507341    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.508268    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.509832    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:49.510129    9379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:49.515672 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:49.515684 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:52.041321 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:52.052329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:52.052403 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:52.082403 1550381 cri.go:89] found id: ""
	I1218 01:51:52.082434 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.082444 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:52.082451 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:52.082513 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:52.108691 1550381 cri.go:89] found id: ""
	I1218 01:51:52.108720 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.108729 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:52.108735 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:52.108795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:52.138279 1550381 cri.go:89] found id: ""
	I1218 01:51:52.138314 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.138323 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:52.138329 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:52.138393 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:52.207039 1550381 cri.go:89] found id: ""
	I1218 01:51:52.207067 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.207076 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:52.207083 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:52.207150 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:52.236007 1550381 cri.go:89] found id: ""
	I1218 01:51:52.236042 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.236052 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:52.236059 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:52.236125 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:52.267547 1550381 cri.go:89] found id: ""
	I1218 01:51:52.267583 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.267593 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:52.267599 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:52.267668 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:52.295275 1550381 cri.go:89] found id: ""
	I1218 01:51:52.295310 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.295320 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:52.295326 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:52.295407 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:52.324187 1550381 cri.go:89] found id: ""
	I1218 01:51:52.324215 1550381 logs.go:282] 0 containers: []
	W1218 01:51:52.324224 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:52.324234 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:52.324246 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:52.352151 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:52.352182 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:52.408412 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:52.408446 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:52.423024 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:52.423098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:52.488577 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:52.479672    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.480321    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.481877    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.482453    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:52.484212    9494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:52.488599 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:52.488613 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:55.015396 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:55.026777 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:55.026851 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:55.052687 1550381 cri.go:89] found id: ""
	I1218 01:51:55.052713 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.052722 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:55.052728 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:55.052786 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:55.082492 1550381 cri.go:89] found id: ""
	I1218 01:51:55.082515 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.082524 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:55.082531 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:55.082592 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:55.107565 1550381 cri.go:89] found id: ""
	I1218 01:51:55.107592 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.107600 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:55.107607 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:55.107674 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:55.135213 1550381 cri.go:89] found id: ""
	I1218 01:51:55.135241 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.135249 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:55.135270 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:55.135332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:55.177099 1550381 cri.go:89] found id: ""
	I1218 01:51:55.177128 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.177137 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:55.177143 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:55.177210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:55.224917 1550381 cri.go:89] found id: ""
	I1218 01:51:55.224946 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.224954 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:55.224961 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:55.225020 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:55.252438 1550381 cri.go:89] found id: ""
	I1218 01:51:55.252465 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.252473 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:55.252479 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:55.252538 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:55.277054 1550381 cri.go:89] found id: ""
	I1218 01:51:55.277074 1550381 logs.go:282] 0 containers: []
	W1218 01:51:55.277082 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:55.277091 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:55.277106 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:55.292214 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:55.292240 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:55.354379 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:55.346236    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.346747    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.348217    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.348649    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:55.350094    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:55.354401 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:55.354412 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:55.379112 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:55.379143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:51:55.407257 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:55.407284 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:57.964281 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:51:57.975020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:51:57.975088 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:51:58.005630 1550381 cri.go:89] found id: ""
	I1218 01:51:58.005658 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.005667 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:51:58.005674 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:51:58.005745 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:51:58.032296 1550381 cri.go:89] found id: ""
	I1218 01:51:58.032319 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.032329 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:51:58.032335 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:51:58.032402 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:51:58.061454 1550381 cri.go:89] found id: ""
	I1218 01:51:58.061479 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.061488 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:51:58.061495 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:51:58.061554 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:51:58.087783 1550381 cri.go:89] found id: ""
	I1218 01:51:58.087808 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.087817 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:51:58.087824 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:51:58.087884 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:51:58.115473 1550381 cri.go:89] found id: ""
	I1218 01:51:58.115496 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.115505 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:51:58.115512 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:51:58.115599 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:51:58.152731 1550381 cri.go:89] found id: ""
	I1218 01:51:58.152757 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.152766 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:51:58.152773 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:51:58.152832 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:51:58.207262 1550381 cri.go:89] found id: ""
	I1218 01:51:58.207284 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.207302 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:51:58.207310 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:51:58.207367 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:51:58.244074 1550381 cri.go:89] found id: ""
	I1218 01:51:58.244103 1550381 logs.go:282] 0 containers: []
	W1218 01:51:58.244112 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:51:58.244121 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:51:58.244133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:51:58.305417 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:51:58.305455 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:51:58.320298 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:51:58.320326 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:51:58.392177 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:51:58.383564    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.384410    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.386085    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.386657    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:51:58.388186    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:51:58.392200 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:51:58.392215 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:51:58.418264 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:51:58.418299 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
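
Each gathering pass runs through /bin/bash -c so that shell pipes and fallbacks behave as written: the container-status command prefers whatever crictl `which` finds and falls back to docker ps -a, while the journalctl and dmesg commands cap each source at its last 400 lines to keep the report bounded. A small Go reproduction of those invocations, assuming the exact command strings from the Run: lines above:

package main

import (
	"fmt"
	"os/exec"
)

// Shell commands copied from the Run: lines above; -n 400 and
// tail -n 400 keep each source to its most recent 400 lines.
var gather = []struct{ label, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"containerd", "sudo journalctl -u containerd -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	// `which crictl || echo crictl` prefers crictl; `|| sudo docker ps -a`
	// falls back to docker when crictl is missing or fails.
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, g := range gather {
		// bash -c mirrors ssh_runner.go's invocation style from the log.
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s", g.label, err, out)
	}
}
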
	I1218 01:52:00.947037 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:00.958414 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:00.958504 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:00.982432 1550381 cri.go:89] found id: ""
	I1218 01:52:00.982456 1550381 logs.go:282] 0 containers: []
	W1218 01:52:00.982465 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:00.982472 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:00.982554 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:01.011620 1550381 cri.go:89] found id: ""
	I1218 01:52:01.011645 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.011654 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:01.011661 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:01.011721 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:01.038538 1550381 cri.go:89] found id: ""
	I1218 01:52:01.038564 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.038572 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:01.038578 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:01.038636 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:01.062732 1550381 cri.go:89] found id: ""
	I1218 01:52:01.062758 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.062768 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:01.062775 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:01.062836 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:01.088130 1550381 cri.go:89] found id: ""
	I1218 01:52:01.088156 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.088165 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:01.088172 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:01.088241 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:01.116412 1550381 cri.go:89] found id: ""
	I1218 01:52:01.116440 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.116450 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:01.116471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:01.116532 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:01.157710 1550381 cri.go:89] found id: ""
	I1218 01:52:01.157737 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.157747 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:01.157754 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:01.157815 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:01.207757 1550381 cri.go:89] found id: ""
	I1218 01:52:01.207784 1550381 logs.go:282] 0 containers: []
	W1218 01:52:01.207794 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:01.207803 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:01.207815 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:01.293467 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:01.293515 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:01.308790 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:01.308825 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:01.377467 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:01.369257    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370071    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370750    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.372362    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.373023    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:01.369257    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370071    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.370750    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.372362    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:01.373023    9820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:01.377487 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:01.377501 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:01.403688 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:01.403722 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
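
The "container status" step above is a single shell one-liner with a guarded command substitution: "which crictl || echo crictl" keeps the pipeline intact even when crictl is missing from PATH, and the trailing "|| sudo docker ps -a" falls back to Docker. The same logic unrolled into a readable sketch (the variable name CRICTL is illustrative, not minikube's code):

    # Prefer crictl when available, otherwise fall back to docker.
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a
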
	I1218 01:52:03.936540 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:03.947485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:03.947559 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:03.972917 1550381 cri.go:89] found id: ""
	I1218 01:52:03.972939 1550381 logs.go:282] 0 containers: []
	W1218 01:52:03.972947 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:03.972953 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:03.973018 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:03.997960 1550381 cri.go:89] found id: ""
	I1218 01:52:03.997983 1550381 logs.go:282] 0 containers: []
	W1218 01:52:03.997992 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:03.997998 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:03.998056 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:04.027683 1550381 cri.go:89] found id: ""
	I1218 01:52:04.027754 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.027780 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:04.027808 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:04.027916 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:04.054769 1550381 cri.go:89] found id: ""
	I1218 01:52:04.054833 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.054843 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:04.054849 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:04.054917 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:04.081260 1550381 cri.go:89] found id: ""
	I1218 01:52:04.081284 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.081293 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:04.081299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:04.081372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:04.106563 1550381 cri.go:89] found id: ""
	I1218 01:52:04.106590 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.106599 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:04.106606 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:04.106667 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:04.131682 1550381 cri.go:89] found id: ""
	I1218 01:52:04.131708 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.131717 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:04.131724 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:04.131790 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:04.170215 1550381 cri.go:89] found id: ""
	I1218 01:52:04.170242 1550381 logs.go:282] 0 containers: []
	W1218 01:52:04.170251 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:04.170260 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:04.170273 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:04.211169 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:04.211207 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:04.263603 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:04.263636 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:04.319257 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:04.319294 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:04.334300 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:04.334329 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:04.399992 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:04.392155    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.392854    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394600    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394909    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.396367    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:04.392155    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.392854    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394600    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.394909    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:04.396367    9946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
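
Each polling round walks the same fixed list of control-plane and addon container names through crictl; a round where every query returns an empty ID list, as here, means no control-plane container was ever created, not merely that one crashed. An equivalent standalone sketch of that scan (the loop itself is illustrative; minikube's real implementation is the Go code in cri.go and logs.go cited above):

    # Scan for each expected control-plane container by name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids="$(sudo crictl ps -a --quiet --name="$name")"
      echo "$name: ${ids:-<none>}"
    done
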
	I1218 01:52:06.900248 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:06.910997 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:06.911067 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:06.935514 1550381 cri.go:89] found id: ""
	I1218 01:52:06.935539 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.935548 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:06.935554 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:06.935612 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:06.959911 1550381 cri.go:89] found id: ""
	I1218 01:52:06.959933 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.959942 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:06.959949 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:06.960006 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:06.989689 1550381 cri.go:89] found id: ""
	I1218 01:52:06.989710 1550381 logs.go:282] 0 containers: []
	W1218 01:52:06.989719 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:06.989725 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:06.989783 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:07.016553 1550381 cri.go:89] found id: ""
	I1218 01:52:07.016578 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.016587 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:07.016594 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:07.016676 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:07.042084 1550381 cri.go:89] found id: ""
	I1218 01:52:07.042106 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.042115 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:07.042121 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:07.042179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:07.067075 1550381 cri.go:89] found id: ""
	I1218 01:52:07.067097 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.067107 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:07.067113 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:07.067176 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:07.096366 1550381 cri.go:89] found id: ""
	I1218 01:52:07.096388 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.096398 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:07.096405 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:07.096465 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:07.125403 1550381 cri.go:89] found id: ""
	I1218 01:52:07.125426 1550381 logs.go:282] 0 containers: []
	W1218 01:52:07.125434 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:07.125444 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:07.125456 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:07.146124 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:07.146152 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:07.254257 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:07.245617   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.246126   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.247718   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.248202   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.249840   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:07.245617   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.246126   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.247718   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.248202   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:07.249840   10043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:07.254280 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:07.254292 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:07.280552 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:07.280590 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:07.307796 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:07.307825 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:09.873637 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:09.884205 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:09.884275 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:09.909771 1550381 cri.go:89] found id: ""
	I1218 01:52:09.909796 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.909805 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:09.909812 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:09.909869 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:09.934051 1550381 cri.go:89] found id: ""
	I1218 01:52:09.934082 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.934092 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:09.934098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:09.934161 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:09.964504 1550381 cri.go:89] found id: ""
	I1218 01:52:09.964528 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.964550 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:09.964561 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:09.964662 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:09.990501 1550381 cri.go:89] found id: ""
	I1218 01:52:09.990525 1550381 logs.go:282] 0 containers: []
	W1218 01:52:09.990534 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:09.990543 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:09.990616 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:10.028312 1550381 cri.go:89] found id: ""
	I1218 01:52:10.028339 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.028348 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:10.028355 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:10.028419 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:10.054415 1550381 cri.go:89] found id: ""
	I1218 01:52:10.054443 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.054453 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:10.054460 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:10.054545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:10.085976 1550381 cri.go:89] found id: ""
	I1218 01:52:10.086003 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.086013 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:10.086020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:10.086081 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:10.112422 1550381 cri.go:89] found id: ""
	I1218 01:52:10.112455 1550381 logs.go:282] 0 containers: []
	W1218 01:52:10.112464 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:10.112473 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:10.112485 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:10.214552 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:10.196275   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.197311   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199133   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199838   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.203593   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:10.196275   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.197311   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199133   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.199838   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:10.203593   10149 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:10.214579 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:10.214591 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:10.245834 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:10.245872 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:10.278949 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:10.278983 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:10.338117 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:10.338153 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
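
The host-side evidence gathered each round comes from journalctl plus a filtered dmesg. An annotated restatement of those commands, with the same flags as in the log (the comments are editorial):

    sudo journalctl -u kubelet -n 400      # last 400 lines of the kubelet unit
    sudo journalctl -u containerd -n 400   # last 400 lines of the containerd unit
    # -P disables the pager, -H selects human-readable output, -L=never
    # turns colour off; --level keeps only warning-or-worse kernel messages.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
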
	I1218 01:52:12.853298 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:12.863919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:12.864003 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:12.888289 1550381 cri.go:89] found id: ""
	I1218 01:52:12.888315 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.888324 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:12.888330 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:12.888389 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:12.914281 1550381 cri.go:89] found id: ""
	I1218 01:52:12.914306 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.914315 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:12.914321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:12.914384 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:12.941058 1550381 cri.go:89] found id: ""
	I1218 01:52:12.941083 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.941092 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:12.941098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:12.941160 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:12.966998 1550381 cri.go:89] found id: ""
	I1218 01:52:12.967022 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.967030 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:12.967037 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:12.967095 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:12.996005 1550381 cri.go:89] found id: ""
	I1218 01:52:12.996027 1550381 logs.go:282] 0 containers: []
	W1218 01:52:12.996036 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:12.996042 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:12.996099 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:13.023321 1550381 cri.go:89] found id: ""
	I1218 01:52:13.023345 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.023354 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:13.023360 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:13.023429 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:13.049195 1550381 cri.go:89] found id: ""
	I1218 01:52:13.049220 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.049229 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:13.049235 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:13.049295 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:13.074787 1550381 cri.go:89] found id: ""
	I1218 01:52:13.074816 1550381 logs.go:282] 0 containers: []
	W1218 01:52:13.074825 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:13.074835 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:13.074874 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:13.131893 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:13.131926 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:13.159867 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:13.159942 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:13.281047 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:13.272958   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.273401   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.274898   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.275221   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.276713   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:13.272958   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.273401   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.274898   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.275221   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:13.276713   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:13.281070 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:13.281089 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:13.307183 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:13.307217 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:15.837707 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:15.848404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:15.848478 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:15.873587 1550381 cri.go:89] found id: ""
	I1218 01:52:15.873615 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.873624 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:15.873630 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:15.873689 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:15.897757 1550381 cri.go:89] found id: ""
	I1218 01:52:15.897780 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.897788 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:15.897795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:15.897852 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:15.923098 1550381 cri.go:89] found id: ""
	I1218 01:52:15.923123 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.923132 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:15.923138 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:15.923231 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:15.952891 1550381 cri.go:89] found id: ""
	I1218 01:52:15.952921 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.952929 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:15.952935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:15.952991 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:15.979178 1550381 cri.go:89] found id: ""
	I1218 01:52:15.979204 1550381 logs.go:282] 0 containers: []
	W1218 01:52:15.979212 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:15.979218 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:15.979276 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:16.007995 1550381 cri.go:89] found id: ""
	I1218 01:52:16.008022 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.008031 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:16.008038 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:16.008101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:16.032581 1550381 cri.go:89] found id: ""
	I1218 01:52:16.032607 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.032616 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:16.032641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:16.032709 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:16.058847 1550381 cri.go:89] found id: ""
	I1218 01:52:16.058872 1550381 logs.go:282] 0 containers: []
	W1218 01:52:16.058881 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:16.058891 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:16.058902 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:16.116382 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:16.116416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:16.131483 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:16.131513 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:16.233031 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:16.217680   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.218790   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223136   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223474   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.228595   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:16.217680   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.218790   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223136   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.223474   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:16.228595   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:16.233053 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:16.233066 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:16.262932 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:16.262966 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
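
All of these listings go through the CRI against containerd's k8s.io namespace (the root /run/containerd/runc/k8s.io in the cri.go lines). As a cross-check outside the test harness, containerd can also be queried directly; an empty result would match the empty crictl output above. Note that ctr is containerd's debugging client and is not something minikube runs here:

    # Ask containerd itself for anything in the Kubernetes namespace.
    sudo ctr -n k8s.io containers list
    sudo ctr -n k8s.io tasks list
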
	I1218 01:52:18.790616 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:18.801658 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:18.801729 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:18.830076 1550381 cri.go:89] found id: ""
	I1218 01:52:18.830102 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.830112 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:18.830118 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:18.830179 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:18.855278 1550381 cri.go:89] found id: ""
	I1218 01:52:18.855306 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.855315 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:18.855321 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:18.855380 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:18.886976 1550381 cri.go:89] found id: ""
	I1218 01:52:18.886998 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.887012 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:18.887018 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:18.887078 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:18.911656 1550381 cri.go:89] found id: ""
	I1218 01:52:18.911678 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.911686 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:18.911692 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:18.911750 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:18.935981 1550381 cri.go:89] found id: ""
	I1218 01:52:18.936002 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.936011 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:18.936017 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:18.936074 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:18.960773 1550381 cri.go:89] found id: ""
	I1218 01:52:18.960795 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.960804 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:18.960811 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:18.960871 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:18.985996 1550381 cri.go:89] found id: ""
	I1218 01:52:18.986023 1550381 logs.go:282] 0 containers: []
	W1218 01:52:18.986032 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:18.986039 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:18.986101 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:19.011618 1550381 cri.go:89] found id: ""
	I1218 01:52:19.011696 1550381 logs.go:282] 0 containers: []
	W1218 01:52:19.011719 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:19.011740 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:19.011766 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:19.027064 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:19.027093 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:19.094483 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:19.086145   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.086880   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088561   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088988   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.090612   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:19.086145   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.086880   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088561   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.088988   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:19.090612   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:19.094507 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:19.094519 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:19.120053 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:19.120087 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:19.190394 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:19.190426 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
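
The describe-nodes step runs the version-pinned kubectl against the on-node kubeconfig, so the localhost:8443 target comes from that file rather than from any kubectl default. A hedged one-off sketch for confirming which server the kubeconfig names, using the same paths shown in the log:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      config view --minify -o jsonpath='{.clusters[0].cluster.server}'
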
	I1218 01:52:21.774413 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:21.785229 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:21.785300 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:21.814294 1550381 cri.go:89] found id: ""
	I1218 01:52:21.814316 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.814325 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:21.814331 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:21.814394 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:21.840168 1550381 cri.go:89] found id: ""
	I1218 01:52:21.840191 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.840200 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:21.840207 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:21.840267 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:21.865098 1550381 cri.go:89] found id: ""
	I1218 01:52:21.865120 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.865129 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:21.865134 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:21.865198 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:21.890513 1550381 cri.go:89] found id: ""
	I1218 01:52:21.890535 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.890543 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:21.890550 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:21.890607 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:21.915362 1550381 cri.go:89] found id: ""
	I1218 01:52:21.915384 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.915393 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:21.915399 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:21.915457 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:21.941078 1550381 cri.go:89] found id: ""
	I1218 01:52:21.941101 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.941110 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:21.941117 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:21.941182 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:21.965276 1550381 cri.go:89] found id: ""
	I1218 01:52:21.965302 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.965311 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:21.965318 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:21.965375 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:21.990348 1550381 cri.go:89] found id: ""
	I1218 01:52:21.990370 1550381 logs.go:282] 0 containers: []
	W1218 01:52:21.990378 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:21.990387 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:21.990398 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:22.046097 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:22.046132 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:22.061468 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:22.061498 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:22.129867 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:22.121557   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.122235   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.123742   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.124182   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:22.125683   10609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:22.129889 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:22.129901 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:22.160943 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:22.160982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
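
The block above is one pass of the start-up health check that repeats throughout this log: pgrep looks for a running kube-apiserver process, then each expected control-plane component is probed with crictl ps -a --quiet --name=<component>, and an empty result produces the paired found id: "" / 0 containers lines. A minimal standalone sketch of that probe in Go (illustrative only, not minikube's implementation; assumes crictl is installed, sudo works non-interactively, and the CRI runtime endpoint is configured):

// probe.go: reproduce the container probe the log shows above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// The case the log reports as: No container was found matching "<name>"
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	}
}

Run against this node at 01:52 it would print "no container found" for every component, which is exactly the state the log records.
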
	I1218 01:52:24.703063 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:24.713938 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:24.714009 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:24.739085 1550381 cri.go:89] found id: ""
	I1218 01:52:24.739167 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.739189 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:24.739209 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:24.739298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:24.763316 1550381 cri.go:89] found id: ""
	I1218 01:52:24.763359 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.763368 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:24.763374 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:24.763443 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:24.789401 1550381 cri.go:89] found id: ""
	I1218 01:52:24.789431 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.789441 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:24.789471 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:24.789558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:24.819426 1550381 cri.go:89] found id: ""
	I1218 01:52:24.819458 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.819468 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:24.819474 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:24.819547 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:24.844106 1550381 cri.go:89] found id: ""
	I1218 01:52:24.844143 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.844152 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:24.844159 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:24.844230 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:24.868116 1550381 cri.go:89] found id: ""
	I1218 01:52:24.868140 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.868149 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:24.868156 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:24.868213 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:24.892247 1550381 cri.go:89] found id: ""
	I1218 01:52:24.892280 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.892289 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:24.892311 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:24.892390 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:24.917988 1550381 cri.go:89] found id: ""
	I1218 01:52:24.918013 1550381 logs.go:282] 0 containers: []
	W1218 01:52:24.918022 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:24.918031 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:24.918060 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:24.972539 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:24.972571 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:24.987364 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:24.987391 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:25.066535 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:25.057583   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.058416   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.060357   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.061023   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:25.062670   10721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:25.066557 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:25.066572 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:25.093529 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:25.093573 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
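
The repeated "connection refused" stderr above is the same failure seen from kubectl's side: nothing is listening on localhost:8443, so the TCP handshake is rejected before any API request can be made, which is consistent with the empty kube-apiserver container listing. A minimal Go reachability check against that endpoint (illustrative only; the host and port are taken from the errors above):

// dialcheck.go: is anything bound to the apiserver port kubectl is dialing?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// On this node this prints a "connect: connection refused" error,
		// matching the kubectl stderr in the log.
		fmt.Println("apiserver endpoint not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
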
	I1218 01:52:27.627215 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:27.637795 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:27.637864 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:27.661825 1550381 cri.go:89] found id: ""
	I1218 01:52:27.661850 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.661859 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:27.661866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:27.661931 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:27.688769 1550381 cri.go:89] found id: ""
	I1218 01:52:27.688795 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.688803 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:27.688810 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:27.688895 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:27.714909 1550381 cri.go:89] found id: ""
	I1218 01:52:27.714992 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.715009 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:27.715017 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:27.715080 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:27.742595 1550381 cri.go:89] found id: ""
	I1218 01:52:27.742620 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.742628 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:27.742636 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:27.742695 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:27.768328 1550381 cri.go:89] found id: ""
	I1218 01:52:27.768353 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.768361 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:27.768368 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:27.768444 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:27.794968 1550381 cri.go:89] found id: ""
	I1218 01:52:27.794993 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.795003 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:27.795010 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:27.795094 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:27.821560 1550381 cri.go:89] found id: ""
	I1218 01:52:27.821587 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.821597 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:27.821603 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:27.821679 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:27.846888 1550381 cri.go:89] found id: ""
	I1218 01:52:27.846912 1550381 logs.go:282] 0 containers: []
	W1218 01:52:27.846921 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:27.846930 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:27.846942 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:27.861757 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:27.861785 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:27.926373 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:27.916602   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.917603   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919230   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.919567   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:27.921199   10832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:27.926400 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:27.926413 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:27.951763 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:27.951803 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:27.984249 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:27.984278 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:30.543132 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:30.553809 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:30.553883 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:30.580729 1550381 cri.go:89] found id: ""
	I1218 01:52:30.580758 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.580767 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:30.580774 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:30.580837 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:30.611455 1550381 cri.go:89] found id: ""
	I1218 01:52:30.611479 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.611488 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:30.611494 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:30.611558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:30.637976 1550381 cri.go:89] found id: ""
	I1218 01:52:30.638002 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.638025 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:30.638049 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:30.638134 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:30.663110 1550381 cri.go:89] found id: ""
	I1218 01:52:30.663135 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.663144 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:30.663150 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:30.663211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:30.689367 1550381 cri.go:89] found id: ""
	I1218 01:52:30.689391 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.689401 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:30.689416 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:30.689480 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:30.714721 1550381 cri.go:89] found id: ""
	I1218 01:52:30.714747 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.714756 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:30.714764 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:30.714826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:30.740391 1550381 cri.go:89] found id: ""
	I1218 01:52:30.740419 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.740428 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:30.740438 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:30.740502 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:30.769197 1550381 cri.go:89] found id: ""
	I1218 01:52:30.769264 1550381 logs.go:282] 0 containers: []
	W1218 01:52:30.769286 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:30.769306 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:30.769337 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:30.825762 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:30.825799 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:30.840467 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:30.840497 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:30.907063 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:30.898565   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.899378   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901153   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.901681   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:30.903149   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:30.907085 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:30.907098 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:30.933175 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:30.933208 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:33.464940 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:33.477904 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:33.477982 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:33.502677 1550381 cri.go:89] found id: ""
	I1218 01:52:33.502703 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.502711 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:33.502718 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:33.502778 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:33.528314 1550381 cri.go:89] found id: ""
	I1218 01:52:33.528341 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.528350 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:33.528356 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:33.528418 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:33.554186 1550381 cri.go:89] found id: ""
	I1218 01:52:33.554213 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.554221 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:33.554227 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:33.554286 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:33.578717 1550381 cri.go:89] found id: ""
	I1218 01:52:33.578740 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.578751 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:33.578758 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:33.578819 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:33.603980 1550381 cri.go:89] found id: ""
	I1218 01:52:33.604054 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.604079 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:33.604098 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:33.604287 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:33.629122 1550381 cri.go:89] found id: ""
	I1218 01:52:33.629149 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.629158 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:33.629165 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:33.629248 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:33.660229 1550381 cri.go:89] found id: ""
	I1218 01:52:33.660266 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.660281 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:33.660288 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:33.660356 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:33.685746 1550381 cri.go:89] found id: ""
	I1218 01:52:33.685812 1550381 logs.go:282] 0 containers: []
	W1218 01:52:33.685838 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:33.685854 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:33.685866 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:33.717052 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:33.717078 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:33.777106 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:33.777142 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:33.791689 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:33.791719 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:33.855601 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:33.847150   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.847890   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.849576   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.850251   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:33.851854   11077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:33.855621 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:33.855633 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
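
The timestamps on these cycles (01:52:21, :24, :27, :30, :33, ...) show the whole probe-and-gather sequence repeating on a roughly three-second cadence; the test's eventual failure after several minutes implies it polls until a start-up deadline expires. A minimal poll-until-deadline sketch in Go (illustrative only; waitForAPIServer and the one-minute timeout are hypothetical values, not minikube's):

// poll.go: poll-until-deadline loop matching the ~3s cadence in the log.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the logged probe:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits non-zero when nothing matches, so Run() == nil means "found".
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		time.Sleep(3 * time.Second) // the gap between cycles in the log
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}
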
	I1218 01:52:36.380440 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:36.395133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:36.395206 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:36.463112 1550381 cri.go:89] found id: ""
	I1218 01:52:36.463145 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.463154 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:36.463162 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:36.463235 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:36.489631 1550381 cri.go:89] found id: ""
	I1218 01:52:36.489656 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.489665 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:36.489671 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:36.489733 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:36.515149 1550381 cri.go:89] found id: ""
	I1218 01:52:36.515175 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.515186 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:36.515192 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:36.515253 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:36.543702 1550381 cri.go:89] found id: ""
	I1218 01:52:36.543727 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.543736 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:36.543743 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:36.543802 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:36.568359 1550381 cri.go:89] found id: ""
	I1218 01:52:36.568384 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.568393 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:36.568400 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:36.568457 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:36.591933 1550381 cri.go:89] found id: ""
	I1218 01:52:36.591959 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.591968 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:36.591974 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:36.592033 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:36.619454 1550381 cri.go:89] found id: ""
	I1218 01:52:36.619479 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.619488 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:36.619494 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:36.619552 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:36.644231 1550381 cri.go:89] found id: ""
	I1218 01:52:36.644256 1550381 logs.go:282] 0 containers: []
	W1218 01:52:36.644265 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:36.644274 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:36.644286 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:36.673981 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:36.674008 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:36.730614 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:36.730648 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:36.745581 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:36.745614 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:36.808564 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:36.800393   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.801019   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.802683   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.803224   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:36.804801   11190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:36.808591 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:36.808604 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:39.334388 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:39.345831 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:39.345904 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:39.374463 1550381 cri.go:89] found id: ""
	I1218 01:52:39.374486 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.374495 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:39.374501 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:39.374567 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:39.439153 1550381 cri.go:89] found id: ""
	I1218 01:52:39.439178 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.439187 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:39.439196 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:39.439255 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:39.483631 1550381 cri.go:89] found id: ""
	I1218 01:52:39.483655 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.483664 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:39.483670 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:39.483746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:39.513656 1550381 cri.go:89] found id: ""
	I1218 01:52:39.513681 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.513689 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:39.513695 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:39.513757 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:39.538364 1550381 cri.go:89] found id: ""
	I1218 01:52:39.538389 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.538397 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:39.538404 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:39.538469 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:39.562963 1550381 cri.go:89] found id: ""
	I1218 01:52:39.562989 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.562997 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:39.563004 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:39.563063 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:39.590225 1550381 cri.go:89] found id: ""
	I1218 01:52:39.590247 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.590255 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:39.590261 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:39.590317 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:39.619590 1550381 cri.go:89] found id: ""
	I1218 01:52:39.619613 1550381 logs.go:282] 0 containers: []
	W1218 01:52:39.619622 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:39.619631 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:39.619642 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:39.645098 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:39.645133 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:39.675338 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:39.675370 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:39.731953 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:39.731988 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:39.746929 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:39.746957 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:39.815336 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:39.807111   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.807747   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809330   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.809940   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:39.811532   11304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:42.315631 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:42.327549 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:42.327635 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:42.355093 1550381 cri.go:89] found id: ""
	I1218 01:52:42.355117 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.355126 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:42.355133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:42.355193 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:42.383724 1550381 cri.go:89] found id: ""
	I1218 01:52:42.383746 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.383755 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:42.383763 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:42.383822 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:42.439728 1550381 cri.go:89] found id: ""
	I1218 01:52:42.439752 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.439761 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:42.439767 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:42.439826 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:42.485723 1550381 cri.go:89] found id: ""
	I1218 01:52:42.485751 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.485760 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:42.485766 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:42.485835 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:42.518003 1550381 cri.go:89] found id: ""
	I1218 01:52:42.518030 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.518040 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:42.518046 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:42.518105 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:42.542509 1550381 cri.go:89] found id: ""
	I1218 01:52:42.542534 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.542543 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:42.542550 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:42.542608 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:42.567103 1550381 cri.go:89] found id: ""
	I1218 01:52:42.567127 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.567135 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:42.567144 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:42.567210 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:42.591556 1550381 cri.go:89] found id: ""
	I1218 01:52:42.591623 1550381 logs.go:282] 0 containers: []
	W1218 01:52:42.591648 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:42.591670 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:42.591708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:42.622840 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:42.622867 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:42.677917 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:42.677950 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:42.692666 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:42.692699 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:42.765474 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:42.757065   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.757907   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759378   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.759855   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:42.761353   11413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1218 01:52:42.765497 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:42.765509 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:45.291290 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:45.308807 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:45.308972 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:45.342117 1550381 cri.go:89] found id: ""
	I1218 01:52:45.342151 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.342160 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:45.342168 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:45.342233 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:45.370490 1550381 cri.go:89] found id: ""
	I1218 01:52:45.370516 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.370525 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:45.370531 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:45.370612 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:45.416227 1550381 cri.go:89] found id: ""
	I1218 01:52:45.416262 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.416272 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:45.416278 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:45.416359 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:45.475986 1550381 cri.go:89] found id: ""
	I1218 01:52:45.476010 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.476019 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:45.476026 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:45.476089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:45.505307 1550381 cri.go:89] found id: ""
	I1218 01:52:45.505375 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.505400 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:45.505419 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:45.505520 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:45.531649 1550381 cri.go:89] found id: ""
	I1218 01:52:45.531676 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.531685 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:45.531691 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:45.531762 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:45.557231 1550381 cri.go:89] found id: ""
	I1218 01:52:45.557258 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.557268 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:45.557274 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:45.557332 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:45.581819 1550381 cri.go:89] found id: ""
	I1218 01:52:45.581846 1550381 logs.go:282] 0 containers: []
	W1218 01:52:45.581855 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:45.581864 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:45.581876 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:45.637946 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:45.637982 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:45.653092 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:45.653127 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:45.733673 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:45.725909   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.726495   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.727841   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.728307   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.729804   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:45.725909   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.726495   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.727841   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.728307   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:45.729804   11517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:45.733695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:45.733708 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:45.759208 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:45.759243 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
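	The block above is one full diagnostic cycle: probe each control-plane component via crictl, then gather kubelet, dmesg, describe-nodes, containerd, and container-status logs. A compact bash sketch of the same per-component probe, using only the commands and component names visible in the Run: lines (meant to run inside the node, e.g. via "minikube ssh"):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      # Mirrors: sudo crictl ps -a --quiet --name=<component>
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done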
	I1218 01:52:48.291278 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:48.302161 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:48.302234 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:48.326549 1550381 cri.go:89] found id: ""
	I1218 01:52:48.326572 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.326580 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:48.326587 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:48.326647 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:48.355829 1550381 cri.go:89] found id: ""
	I1218 01:52:48.355853 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.355863 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:48.355869 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:48.355927 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:48.384367 1550381 cri.go:89] found id: ""
	I1218 01:52:48.384404 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.384414 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:48.384421 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:48.384495 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:48.440457 1550381 cri.go:89] found id: ""
	I1218 01:52:48.440486 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.440495 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:48.440502 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:48.440572 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:48.484538 1550381 cri.go:89] found id: ""
	I1218 01:52:48.484565 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.484574 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:48.484580 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:48.484671 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:48.517629 1550381 cri.go:89] found id: ""
	I1218 01:52:48.517655 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.517664 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:48.517670 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:48.517727 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:48.544213 1550381 cri.go:89] found id: ""
	I1218 01:52:48.544250 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.544259 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:48.544268 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:48.544338 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:48.571178 1550381 cri.go:89] found id: ""
	I1218 01:52:48.571214 1550381 logs.go:282] 0 containers: []
	W1218 01:52:48.571224 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:48.571233 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:48.571244 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:48.629108 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:48.629154 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:48.644078 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:48.644105 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:48.710322 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:48.701933   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.702491   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704137   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704712   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.706352   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:48.701933   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.702491   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704137   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.704712   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:48.706352   11632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:48.710345 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:48.710357 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:48.735873 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:48.735908 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:51.264224 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:51.274867 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:51.274936 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:51.302544 1550381 cri.go:89] found id: ""
	I1218 01:52:51.302574 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.302582 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:51.302591 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:51.302650 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:51.326887 1550381 cri.go:89] found id: ""
	I1218 01:52:51.326920 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.326929 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:51.326935 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:51.326996 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:51.355805 1550381 cri.go:89] found id: ""
	I1218 01:52:51.355833 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.355842 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:51.355849 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:51.355910 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:51.385402 1550381 cri.go:89] found id: ""
	I1218 01:52:51.385475 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.385502 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:51.385516 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:51.385597 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:51.429600 1550381 cri.go:89] found id: ""
	I1218 01:52:51.429679 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.429705 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:51.429723 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:51.429795 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:51.482295 1550381 cri.go:89] found id: ""
	I1218 01:52:51.482362 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.482386 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:51.482406 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:51.482483 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:51.509210 1550381 cri.go:89] found id: ""
	I1218 01:52:51.509282 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.509307 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:51.509319 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:51.509392 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:51.534258 1550381 cri.go:89] found id: ""
	I1218 01:52:51.534335 1550381 logs.go:282] 0 containers: []
	W1218 01:52:51.534359 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:51.534374 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:51.534399 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:51.590233 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:51.590266 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:51.604772 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:51.604807 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:51.669210 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:51.660468   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.661850   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.662312   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.663995   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.664345   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:51.660468   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.661850   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.662312   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.663995   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:51.664345   11747 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:51.669233 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:51.669245 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:51.694168 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:51.694201 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:54.225084 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:54.235834 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:54.235909 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:54.263169 1550381 cri.go:89] found id: ""
	I1218 01:52:54.263202 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.263212 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:54.263219 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:54.263286 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:54.288775 1550381 cri.go:89] found id: ""
	I1218 01:52:54.288801 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.288812 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:54.288818 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:54.288881 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:54.313424 1550381 cri.go:89] found id: ""
	I1218 01:52:54.313455 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.313463 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:54.313470 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:54.313545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:54.337557 1550381 cri.go:89] found id: ""
	I1218 01:52:54.337586 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.337595 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:54.337604 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:54.337660 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:54.362944 1550381 cri.go:89] found id: ""
	I1218 01:52:54.362968 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.362976 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:54.362983 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:54.363055 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:54.405526 1550381 cri.go:89] found id: ""
	I1218 01:52:54.405546 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.405554 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:54.405560 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:54.405617 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:54.470952 1550381 cri.go:89] found id: ""
	I1218 01:52:54.470975 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.470983 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:54.470995 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:54.471051 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:54.499299 1550381 cri.go:89] found id: ""
	I1218 01:52:54.499324 1550381 logs.go:282] 0 containers: []
	W1218 01:52:54.499332 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:54.499341 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:54.499352 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:54.554755 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:54.554791 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:52:54.569411 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:54.569439 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:54.630717 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:54.622173   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.622694   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.623736   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625233   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625729   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:54.622173   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.622694   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.623736   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625233   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:54.625729   11858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:54.630737 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:54.630751 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:54.656160 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:54.656197 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:57.184460 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:52:57.195292 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:52:57.195360 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:52:57.220784 1550381 cri.go:89] found id: ""
	I1218 01:52:57.220821 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.220831 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:52:57.220837 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:52:57.220911 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:52:57.245470 1550381 cri.go:89] found id: ""
	I1218 01:52:57.245493 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.245501 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:52:57.245508 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:52:57.245572 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:52:57.271053 1550381 cri.go:89] found id: ""
	I1218 01:52:57.271076 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.271084 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:52:57.271091 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:52:57.271149 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:52:57.297094 1550381 cri.go:89] found id: ""
	I1218 01:52:57.297117 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.297125 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:52:57.297132 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:52:57.297189 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:52:57.321869 1550381 cri.go:89] found id: ""
	I1218 01:52:57.321903 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.321913 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:52:57.321919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:52:57.321980 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:52:57.346700 1550381 cri.go:89] found id: ""
	I1218 01:52:57.346726 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.346736 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:52:57.346743 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:52:57.346804 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:52:57.371462 1550381 cri.go:89] found id: ""
	I1218 01:52:57.371487 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.371496 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:52:57.371503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:52:57.371561 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:52:57.408706 1550381 cri.go:89] found id: ""
	I1218 01:52:57.408725 1550381 logs.go:282] 0 containers: []
	W1218 01:52:57.408733 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:52:57.408742 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:52:57.408754 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:52:57.518131 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:52:57.510001   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.510418   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512044   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512702   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.514351   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:52:57.510001   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.510418   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512044   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.512702   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:52:57.514351   11965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:52:57.518152 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:52:57.518165 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:52:57.544836 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:52:57.544872 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:52:57.572743 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:52:57.572782 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:52:57.635526 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:52:57.635567 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:00.150459 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:00.169757 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:00.169839 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:00.240442 1550381 cri.go:89] found id: ""
	I1218 01:53:00.240472 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.240482 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:00.240489 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:00.240568 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:00.297137 1550381 cri.go:89] found id: ""
	I1218 01:53:00.297224 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.297243 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:00.297253 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:00.297363 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:00.336217 1550381 cri.go:89] found id: ""
	I1218 01:53:00.336242 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.336251 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:00.336259 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:00.336333 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:00.365991 1550381 cri.go:89] found id: ""
	I1218 01:53:00.366020 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.366030 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:00.366037 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:00.366107 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:00.425076 1550381 cri.go:89] found id: ""
	I1218 01:53:00.425152 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.425177 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:00.425198 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:00.425310 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:00.464180 1550381 cri.go:89] found id: ""
	I1218 01:53:00.464259 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.464291 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:00.464313 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:00.464419 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:00.498012 1550381 cri.go:89] found id: ""
	I1218 01:53:00.498088 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.498112 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:00.498133 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:00.498248 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:00.526153 1550381 cri.go:89] found id: ""
	I1218 01:53:00.526228 1550381 logs.go:282] 0 containers: []
	W1218 01:53:00.526250 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:00.526271 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:00.526313 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:00.581384 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:00.581418 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:00.596391 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:00.596467 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:00.665518 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:00.656710   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.657369   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659279   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659812   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.661528   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:00.656710   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.657369   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659279   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.659812   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:00.661528   12079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:00.665541 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:00.665554 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:00.691014 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:00.691052 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:03.221071 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:03.232071 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:03.232143 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:03.256975 1550381 cri.go:89] found id: ""
	I1218 01:53:03.256998 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.257006 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:03.257012 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:03.257070 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:03.286981 1550381 cri.go:89] found id: ""
	I1218 01:53:03.287006 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.287021 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:03.287028 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:03.287089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:03.315833 1550381 cri.go:89] found id: ""
	I1218 01:53:03.315858 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.315867 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:03.315873 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:03.315935 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:03.343588 1550381 cri.go:89] found id: ""
	I1218 01:53:03.343611 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.343619 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:03.343626 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:03.343684 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:03.369440 1550381 cri.go:89] found id: ""
	I1218 01:53:03.369469 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.369478 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:03.369485 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:03.369545 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:03.428115 1550381 cri.go:89] found id: ""
	I1218 01:53:03.428138 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.428147 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:03.428154 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:03.428211 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:03.484823 1550381 cri.go:89] found id: ""
	I1218 01:53:03.484847 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.484856 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:03.484862 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:03.484920 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:03.512094 1550381 cri.go:89] found id: ""
	I1218 01:53:03.512119 1550381 logs.go:282] 0 containers: []
	W1218 01:53:03.512128 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:03.512139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:03.512150 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:03.568376 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:03.568411 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:03.583603 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:03.583632 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:03.651107 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:03.641448   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.642529   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.644209   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.645062   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.646724   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:03.641448   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.642529   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.644209   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.645062   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:03.646724   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:03.651129 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:03.651143 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:03.676088 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:03.676125 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:06.206266 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:06.217464 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:06.217558 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:06.242745 1550381 cri.go:89] found id: ""
	I1218 01:53:06.242770 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.242779 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:06.242786 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:06.242846 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:06.267735 1550381 cri.go:89] found id: ""
	I1218 01:53:06.267757 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.267765 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:06.267771 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:06.267834 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:06.297274 1550381 cri.go:89] found id: ""
	I1218 01:53:06.297297 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.297306 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:06.297313 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:06.297372 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:06.326794 1550381 cri.go:89] found id: ""
	I1218 01:53:06.326820 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.326829 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:06.326835 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:06.326893 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:06.351519 1550381 cri.go:89] found id: ""
	I1218 01:53:06.351543 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.351552 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:06.351558 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:06.351617 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:06.378499 1550381 cri.go:89] found id: ""
	I1218 01:53:06.378525 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.378534 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:06.378540 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:06.378598 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:06.414203 1550381 cri.go:89] found id: ""
	I1218 01:53:06.414236 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.414246 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:06.414252 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:06.414316 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:06.493089 1550381 cri.go:89] found id: ""
	I1218 01:53:06.493116 1550381 logs.go:282] 0 containers: []
	W1218 01:53:06.493125 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:06.493134 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:06.493147 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:06.522114 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:06.522145 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:06.578855 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:06.578891 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:06.594005 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:06.594033 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:06.658779 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:06.650476   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.651243   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.652788   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.653284   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.654784   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:06.650476   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.651243   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.652788   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.653284   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:06.654784   12316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:06.658800 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:06.658814 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
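
Each gathering pass above probes the CRI for every expected control-plane and addon container by name, and every query comes back with an empty ID list. A minimal bash sketch of the same enumeration (the component names and the crictl invocation are taken verbatim from the log; the loop itself is illustrative):

    # Probe each expected container by name; -a includes exited
    # containers, --quiet prints IDs only.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done
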
	I1218 01:53:09.183921 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:09.194857 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:09.194928 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:09.218740 1550381 cri.go:89] found id: ""
	I1218 01:53:09.218764 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.218772 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:09.218778 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:09.218835 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:09.243853 1550381 cri.go:89] found id: ""
	I1218 01:53:09.243879 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.243888 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:09.243894 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:09.243954 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:09.269591 1550381 cri.go:89] found id: ""
	I1218 01:53:09.269615 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.269624 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:09.269630 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:09.269691 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:09.299082 1550381 cri.go:89] found id: ""
	I1218 01:53:09.299120 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.299129 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:09.299136 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:09.299207 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:09.324088 1550381 cri.go:89] found id: ""
	I1218 01:53:09.324121 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.324131 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:09.324137 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:09.324203 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:09.348898 1550381 cri.go:89] found id: ""
	I1218 01:53:09.348921 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.348930 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:09.348936 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:09.348997 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:09.374245 1550381 cri.go:89] found id: ""
	I1218 01:53:09.374268 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.374279 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:09.374286 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:09.374346 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:09.413630 1550381 cri.go:89] found id: ""
	I1218 01:53:09.413653 1550381 logs.go:282] 0 containers: []
	W1218 01:53:09.413662 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:09.413672 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:09.413689 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:09.474660 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:09.474685 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:09.541382 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:09.534111   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.534512   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.535991   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.536308   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.537731   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:09.534111   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.534512   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.535991   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.536308   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:09.537731   12417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:09.541403 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:09.541416 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:09.566761 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:09.566792 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:09.593984 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:09.594011 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
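
The sudo pgrep -xnf kube-apiserver.*minikube.* line that opens each pass is minikube waiting for an apiserver process before retrying; here it keeps finding nothing, so the pass repeats every ~3 seconds. A bounded version of that wait, as an illustrative sketch (the 60s timeout is an assumption, not minikube's value):

    # Poll for a kube-apiserver process; give up after 60s (illustrative).
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never appeared" >&2; exit 1; }
      sleep 3
    done
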
	I1218 01:53:12.149658 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:12.160130 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:12.160258 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:12.185266 1550381 cri.go:89] found id: ""
	I1218 01:53:12.185339 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.185356 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:12.185363 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:12.185434 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:12.212092 1550381 cri.go:89] found id: ""
	I1218 01:53:12.212124 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.212133 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:12.212139 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:12.212205 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:12.235977 1550381 cri.go:89] found id: ""
	I1218 01:53:12.236009 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.236018 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:12.236024 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:12.236091 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:12.260037 1550381 cri.go:89] found id: ""
	I1218 01:53:12.260069 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.260079 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:12.260085 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:12.260151 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:12.285034 1550381 cri.go:89] found id: ""
	I1218 01:53:12.285060 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.285069 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:12.285075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:12.285142 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:12.309185 1550381 cri.go:89] found id: ""
	I1218 01:53:12.309221 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.309231 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:12.309256 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:12.309330 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:12.333588 1550381 cri.go:89] found id: ""
	I1218 01:53:12.333613 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.333622 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:12.333629 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:12.333697 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:12.362204 1550381 cri.go:89] found id: ""
	I1218 01:53:12.362228 1550381 logs.go:282] 0 containers: []
	W1218 01:53:12.362237 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:12.362246 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:12.362292 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:12.427192 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:12.431443 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:12.465023 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:12.465048 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:12.534431 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:12.526324   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.526831   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528540   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528945   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.530443   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:12.526324   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.526831   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528540   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.528945   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:12.530443   12532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:12.534453 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:12.534465 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:12.560311 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:12.560349 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
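
The recurring describe-nodes failure is the core symptom: /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and the dial is refused because no apiserver container exists (all crictl probes above are empty). The same condition can be confirmed from inside the node without kubectl; curl here is an illustrative substitute for the call in the log:

    # While the apiserver is down this prints "apiserver unreachable";
    # a healthy control plane answers /healthz (possibly with 401/403).
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"
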
	I1218 01:53:15.088443 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:15.100075 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:15.100170 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:15.126386 1550381 cri.go:89] found id: ""
	I1218 01:53:15.126410 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.126419 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:15.126425 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:15.126493 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:15.152426 1550381 cri.go:89] found id: ""
	I1218 01:53:15.152450 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.152459 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:15.152466 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:15.152529 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:15.178155 1550381 cri.go:89] found id: ""
	I1218 01:53:15.178184 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.178193 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:15.178199 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:15.178263 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:15.203664 1550381 cri.go:89] found id: ""
	I1218 01:53:15.203687 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.203696 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:15.203703 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:15.203767 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:15.228792 1550381 cri.go:89] found id: ""
	I1218 01:53:15.228815 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.228823 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:15.228830 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:15.228891 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:15.257550 1550381 cri.go:89] found id: ""
	I1218 01:53:15.257575 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.257585 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:15.257594 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:15.257656 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:15.283324 1550381 cri.go:89] found id: ""
	I1218 01:53:15.283350 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.283359 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:15.283365 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:15.283430 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:15.311422 1550381 cri.go:89] found id: ""
	I1218 01:53:15.311455 1550381 logs.go:282] 0 containers: []
	W1218 01:53:15.311465 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:15.311474 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:15.311486 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:15.367419 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:15.367456 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:15.382340 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:15.382370 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:15.500526 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:15.489021   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.489409   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.492963   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.494907   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.495528   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:15.489021   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.489409   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.492963   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.494907   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:15.495528   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:15.500551 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:15.500563 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:15.527154 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:15.527190 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
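
Besides the CRI probes, every pass collects the same log sources: the kubelet and containerd journals, filtered kernel messages, and a container listing. Bundled into one script, the exact commands from the log look like this (only the destination directory is an assumption for illustration):

    out=/tmp/minikube-diag    # illustrative destination
    mkdir -p "$out"
    sudo journalctl -u kubelet    -n 400 > "$out/kubelet.log"
    sudo journalctl -u containerd -n 400 > "$out/containerd.log"
    # -P: no pager, -H: human-readable, -L=never: strip color codes
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > "$out/dmesg.log"
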
	I1218 01:53:18.057588 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:18.068726 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:18.068799 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:18.096722 1550381 cri.go:89] found id: ""
	I1218 01:53:18.096859 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.096895 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:18.096919 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:18.097001 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:18.121827 1550381 cri.go:89] found id: ""
	I1218 01:53:18.121851 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.121860 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:18.121866 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:18.121932 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:18.146993 1550381 cri.go:89] found id: ""
	I1218 01:53:18.147018 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.147028 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:18.147034 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:18.147094 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:18.171236 1550381 cri.go:89] found id: ""
	I1218 01:53:18.171258 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.171266 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:18.171272 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:18.171333 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:18.199330 1550381 cri.go:89] found id: ""
	I1218 01:53:18.199355 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.199367 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:18.199373 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:18.199432 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:18.225625 1550381 cri.go:89] found id: ""
	I1218 01:53:18.225649 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.225659 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:18.225666 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:18.225746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:18.250702 1550381 cri.go:89] found id: ""
	I1218 01:53:18.250725 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.250734 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:18.250741 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:18.250854 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:18.276500 1550381 cri.go:89] found id: ""
	I1218 01:53:18.276525 1550381 logs.go:282] 0 containers: []
	W1218 01:53:18.276534 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:18.276543 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:18.276559 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:18.333753 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:18.333788 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:18.350466 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:18.350520 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:18.431435 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:18.419698   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.420102   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421430   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421827   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.424073   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:18.419698   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.420102   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421430   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.421827   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:18.424073   12757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:18.431467 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:18.431480 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:18.463849 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:18.463889 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
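
The container-status command is built with a two-level fallback: the backquoted which crictl || echo crictl substitutes the resolved crictl path (or the bare name if lookup fails), and the outer || sudo docker ps -a drops back to Docker when the crictl invocation itself errors. The same logic with modern $() substitution, as a sketch:

    crictl_bin=$(which crictl || echo crictl)    # bare name if not on PATH
    sudo "$crictl_bin" ps -a || sudo docker ps -a    # fall back to Docker on failure
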
	I1218 01:53:21.008824 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:21.019970 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:21.020040 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:21.044583 1550381 cri.go:89] found id: ""
	I1218 01:53:21.044607 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.044616 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:21.044641 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:21.044701 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:21.069261 1550381 cri.go:89] found id: ""
	I1218 01:53:21.069286 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.069295 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:21.069301 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:21.069360 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:21.099196 1550381 cri.go:89] found id: ""
	I1218 01:53:21.099219 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.099228 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:21.099234 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:21.099298 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:21.124519 1550381 cri.go:89] found id: ""
	I1218 01:53:21.124541 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.124550 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:21.124556 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:21.124707 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:21.153447 1550381 cri.go:89] found id: ""
	I1218 01:53:21.153474 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.153483 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:21.153503 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:21.153561 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:21.178670 1550381 cri.go:89] found id: ""
	I1218 01:53:21.178694 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.178702 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:21.178709 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:21.178770 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:21.207919 1550381 cri.go:89] found id: ""
	I1218 01:53:21.207944 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.207953 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:21.207959 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:21.208017 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:21.232478 1550381 cri.go:89] found id: ""
	I1218 01:53:21.232503 1550381 logs.go:282] 0 containers: []
	W1218 01:53:21.232512 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:21.232521 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:21.232533 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:21.287757 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:21.287789 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:21.302312 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:21.302349 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:21.366377 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:21.358420   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.358816   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360484   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360966   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.362554   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:21.358420   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.358816   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360484   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.360966   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:21.362554   12870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:21.366399 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:21.366411 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:21.393029 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:21.393110 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
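
The "root /run/containerd/runc/k8s.io" in every cri.go line is containerd's k8s.io namespace, where kubelet-created containers live. When crictl reports nothing, the emptiness can be cross-checked against containerd directly; a sketch, assuming the ctr client is present in the node image:

    # An empty table from either command matches the empty crictl results above.
    sudo ctr --namespace k8s.io containers list
    sudo ctr --namespace k8s.io tasks list
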
	I1218 01:53:23.948667 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:23.959340 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:23.959436 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:23.986999 1550381 cri.go:89] found id: ""
	I1218 01:53:23.987024 1550381 logs.go:282] 0 containers: []
	W1218 01:53:23.987033 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:23.987040 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:23.987103 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:24.020720 1550381 cri.go:89] found id: ""
	I1218 01:53:24.020799 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.020833 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:24.020846 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:24.020920 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:24.047235 1550381 cri.go:89] found id: ""
	I1218 01:53:24.047267 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.047283 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:24.047299 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:24.047373 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:24.080575 1550381 cri.go:89] found id: ""
	I1218 01:53:24.080599 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.080608 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:24.080615 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:24.080706 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:24.105557 1550381 cri.go:89] found id: ""
	I1218 01:53:24.105585 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.105595 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:24.105601 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:24.105661 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:24.130738 1550381 cri.go:89] found id: ""
	I1218 01:53:24.130764 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.130773 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:24.130779 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:24.130839 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:24.159061 1550381 cri.go:89] found id: ""
	I1218 01:53:24.159088 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.159097 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:24.159104 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:24.159166 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:24.187647 1550381 cri.go:89] found id: ""
	I1218 01:53:24.187674 1550381 logs.go:282] 0 containers: []
	W1218 01:53:24.187684 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:24.187694 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:24.187704 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:24.242513 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:24.242544 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:24.257316 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:24.257396 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:24.320000 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:24.312222   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.312775   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314002   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314568   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.316156   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:24.312222   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.312775   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314002   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.314568   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:24.316156   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:24.320020 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:24.320037 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:24.346099 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:24.346136 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
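
Every failed kubectl call in this loop uses --kubeconfig=/var/lib/minikube/kubeconfig, whose server field is what resolves to localhost:8443. Printing that field directly separates "wrong endpoint" from "apiserver down"; a sketch using the kubectl binary and kubeconfig paths from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'; echo
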
	I1218 01:53:26.873531 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:26.885238 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:26.885314 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:26.910216 1550381 cri.go:89] found id: ""
	I1218 01:53:26.910239 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.910247 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:26.910253 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:26.910313 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:26.933448 1550381 cri.go:89] found id: ""
	I1218 01:53:26.933475 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.933484 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:26.933490 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:26.933553 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:26.957855 1550381 cri.go:89] found id: ""
	I1218 01:53:26.957888 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.957897 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:26.957904 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:26.957979 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:26.982293 1550381 cri.go:89] found id: ""
	I1218 01:53:26.982357 1550381 logs.go:282] 0 containers: []
	W1218 01:53:26.982373 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:26.982380 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:26.982445 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:27.008361 1550381 cri.go:89] found id: ""
	I1218 01:53:27.008398 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.008408 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:27.008415 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:27.008475 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:27.037587 1550381 cri.go:89] found id: ""
	I1218 01:53:27.037613 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.037622 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:27.037628 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:27.037686 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:27.065312 1550381 cri.go:89] found id: ""
	I1218 01:53:27.065376 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.065401 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:27.065423 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:27.065510 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:27.090401 1550381 cri.go:89] found id: ""
	I1218 01:53:27.090427 1550381 logs.go:282] 0 containers: []
	W1218 01:53:27.090435 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:27.090445 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:27.090457 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:27.105745 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:27.105773 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:27.166883 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:27.158743   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.159249   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.160767   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.161180   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.162645   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:27.158743   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.159249   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.160767   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.161180   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:27.162645   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:27.166902 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:27.166917 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:27.192695 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:27.192732 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:27.224139 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:27.224167 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
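
With all eight component probes empty and the apiserver port refusing connections, the kubelet journal gathered above is the place that should explain why no static pods were started. A heuristic filter over the same capture (the grep pattern is illustrative, not exhaustive):

    # Surface recent kubelet failures from the last 400 journal lines.
    sudo journalctl -u kubelet -n 400 | grep -Ei 'fail|error|refused' | tail -n 20
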
	I1218 01:53:29.783401 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:29.794627 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:29.794738 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:29.819835 1550381 cri.go:89] found id: ""
	I1218 01:53:29.819862 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.819872 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:29.819879 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:29.819939 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:29.844881 1550381 cri.go:89] found id: ""
	I1218 01:53:29.844910 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.844919 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:29.844925 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:29.844986 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:29.869995 1550381 cri.go:89] found id: ""
	I1218 01:53:29.870023 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.870032 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:29.870038 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:29.870100 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:29.895647 1550381 cri.go:89] found id: ""
	I1218 01:53:29.895671 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.895681 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:29.895687 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:29.895746 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:29.922749 1550381 cri.go:89] found id: ""
	I1218 01:53:29.922773 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.922782 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:29.922788 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:29.922847 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:29.948026 1550381 cri.go:89] found id: ""
	I1218 01:53:29.948052 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.948061 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:29.948071 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:29.948129 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:29.974575 1550381 cri.go:89] found id: ""
	I1218 01:53:29.974598 1550381 logs.go:282] 0 containers: []
	W1218 01:53:29.974607 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:29.974614 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:29.974673 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:30.004723 1550381 cri.go:89] found id: ""
	I1218 01:53:30.004807 1550381 logs.go:282] 0 containers: []
	W1218 01:53:30.004831 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:30.004861 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:30.004908 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:30.103939 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:30.103976 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:30.120775 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:30.120815 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:30.191673 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:30.183408   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.184288   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.185882   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.186462   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.187470   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:30.183408   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.184288   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.185882   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.186462   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:30.187470   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:30.191695 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:30.191707 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:30.218142 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:30.218175 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:32.750923 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:32.764019 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1218 01:53:32.764089 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 01:53:32.789861 1550381 cri.go:89] found id: ""
	I1218 01:53:32.789885 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.789894 1550381 logs.go:284] No container was found matching "kube-apiserver"
	I1218 01:53:32.789900 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1218 01:53:32.789967 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 01:53:32.821480 1550381 cri.go:89] found id: ""
	I1218 01:53:32.821513 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.821525 1550381 logs.go:284] No container was found matching "etcd"
	I1218 01:53:32.821532 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1218 01:53:32.821601 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 01:53:32.847702 1550381 cri.go:89] found id: ""
	I1218 01:53:32.847733 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.847744 1550381 logs.go:284] No container was found matching "coredns"
	I1218 01:53:32.847751 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1218 01:53:32.847811 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 01:53:32.872820 1550381 cri.go:89] found id: ""
	I1218 01:53:32.872845 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.872855 1550381 logs.go:284] No container was found matching "kube-scheduler"
	I1218 01:53:32.872861 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1218 01:53:32.872976 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 01:53:32.901902 1550381 cri.go:89] found id: ""
	I1218 01:53:32.901975 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.902012 1550381 logs.go:284] No container was found matching "kube-proxy"
	I1218 01:53:32.902020 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 01:53:32.902100 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 01:53:32.926991 1550381 cri.go:89] found id: ""
	I1218 01:53:32.927016 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.927024 1550381 logs.go:284] No container was found matching "kube-controller-manager"
	I1218 01:53:32.927031 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1218 01:53:32.927093 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 01:53:32.951930 1550381 cri.go:89] found id: ""
	I1218 01:53:32.951957 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.951966 1550381 logs.go:284] No container was found matching "kindnet"
	I1218 01:53:32.951972 1550381 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1218 01:53:32.952034 1550381 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1218 01:53:32.977838 1550381 cri.go:89] found id: ""
	I1218 01:53:32.977864 1550381 logs.go:282] 0 containers: []
	W1218 01:53:32.977874 1550381 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1218 01:53:32.977883 1550381 logs.go:123] Gathering logs for describe nodes ...
	I1218 01:53:32.977894 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1218 01:53:33.047486 1550381 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:33.037555   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.038730   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.039716   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041395   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041745   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1218 01:53:33.037555   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.038730   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.039716   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041395   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:33.041745   13318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1218 01:53:33.047516 1550381 logs.go:123] Gathering logs for containerd ...
	I1218 01:53:33.047530 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1218 01:53:33.074046 1550381 logs.go:123] Gathering logs for container status ...
	I1218 01:53:33.074084 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 01:53:33.106481 1550381 logs.go:123] Gathering logs for kubelet ...
	I1218 01:53:33.106509 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 01:53:33.164051 1550381 logs.go:123] Gathering logs for dmesg ...
	I1218 01:53:33.164095 1550381 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 01:53:35.679393 1550381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:53:35.706090 1550381 out.go:203] 
	W1218 01:53:35.709129 1550381 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1218 01:53:35.709179 1550381 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1218 01:53:35.709189 1550381 out.go:285] * Related issues:
	W1218 01:53:35.709204 1550381 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1218 01:53:35.709225 1550381 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1218 01:53:35.712031 1550381 out.go:203] 
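The "X Exiting due to K8S_APISERVER_MISSING" line above is minikube giving up after its 6m0s wait loop: each iteration ran the pgrep and crictl probes shown in the log, and both kept coming back empty. The same two probes can be replayed by hand from the host; both commands are taken verbatim from the log above, only the minikube ssh wrapper is added here:

    minikube ssh -p newest-cni-120615 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    minikube ssh -p newest-cni-120615 -- sudo crictl ps -a --quiet --name=kube-apiserver

Empty output from both means the apiserver static pod was never created at all, which points at the kubelet rather than at the apiserver flags; the kubelet section further down shows why.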
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058634955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058646516Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058675996Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058690896Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058702449Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058719162Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058734998Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058749521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058766029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.058797364Z" level=info msg="Connect containerd service"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.059062129Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.059621443Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.078574656Z" level=info msg="Start subscribing containerd event"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.078669144Z" level=info msg="Start recovering state"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.079191052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.079329806Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117026802Z" level=info msg="Start event monitor"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117092737Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117103362Z" level=info msg="Start streaming server"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117113224Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117122127Z" level=info msg="runtime interface starting up..."
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117129035Z" level=info msg="starting plugins..."
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.117373017Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 01:47:32 newest-cni-120615 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 01:47:32 newest-cni-120615 containerd[555]: time="2025-12-18T01:47:32.118837196Z" level=info msg="containerd successfully booted in 0.082564s"
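containerd itself comes up cleanly ("successfully booted in 0.082564s"); the only error in this section is the "cni config load failed" line, and by its own message that just means no network config exists in /etc/cni/net.d yet. That is typically benign this early in bringup, since the CNI config is normally written only after the control plane is running, so it is a symptom here rather than the cause. A quick way to check whether a config ever landed (a hypothetical follow-up check, not part of the test run):

    minikube ssh -p newest-cni-120615 -- ls -la /etc/cni/net.d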
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 01:53:48.789685   13997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:48.790085   13997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:48.791513   13997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:48.791946   13997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 01:53:48.793354   13997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:53:48 up  8:36,  0 user,  load average: 0.41, 0.55, 1.13
	Linux newest-cni-120615 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 01:53:45 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:46 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 18 01:53:46 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:46 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:46 newest-cni-120615 kubelet[13860]: E1218 01:53:46.470296   13860 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:46 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:46 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:47 newest-cni-120615 kubelet[13882]: E1218 01:53:47.203776   13882 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:47 newest-cni-120615 kubelet[13902]: E1218 01:53:47.959940   13902 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:47 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 01:53:48 newest-cni-120615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 18 01:53:48 newest-cni-120615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:48 newest-cni-120615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 01:53:48 newest-cni-120615 kubelet[13976]: E1218 01:53:48.710650   13976 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 01:53:48 newest-cni-120615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 01:53:48 newest-cni-120615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
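This kubelet section is the root cause of everything above: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), systemd restarts it, and the loop repeats (the restart counter reaches 8 within this capture). With no kubelet, the static apiserver pod is never launched, which is exactly the K8S_APISERVER_MISSING failure earlier. The host's cgroup version can be confirmed with the standard check from the Kubernetes documentation (cgroup2fs means v2, tmpfs means v1):

    stat -fc %T /sys/fs/cgroup

The kernel section above shows an Ubuntu 20.04 worker (5.15.0-1084-aws #91~20.04.1-Ubuntu), which defaults to cgroup v1, so this job would need to run on a cgroup v2 (unified hierarchy) host for kubelet v1.35 to start.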
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-120615 -n newest-cni-120615: exit status 2 (380.632957ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-120615" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.62s)
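The Pause failure is collateral damage from the same broken cluster: status --format={{.APIServer}} reports Stopped, so the post-mortem helper skips its kubectl checks. The "exit status 2 (may be ok)" note is informative on its own: per the minikube status help text, the exit code encodes host, cluster, and Kubernetes state as bits, so 2 here means the host is up but the cluster control plane is not. For scripting, the same information should be available as JSON (assuming the --output flag of minikube status):

    out/minikube-linux-arm64 status -p newest-cni-120615 --output json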

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (287.94s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
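Each poll below is the Go test client issuing a label-selector pod list against the no-preload cluster's apiserver at 192.168.76.2:8443 (the labelSelector=k8s-app%3Dkubernetes-dashboard in the URL is just the URL-encoded form of k8s-app=kubernetes-dashboard). The same query by hand, with plain kubectl against the corresponding context (the profile's context name does not appear in this excerpt):

    kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

Every attempt over the 9m0s window fails with connection refused because that apiserver never comes back after the stop, consistent with the same cgroup v1 kubelet failure seen in the newest-cni logs above.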
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(the warning above repeats verbatim 13 more times)
E1218 01:56:43.269885 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(the warning above repeats verbatim 51 more times)
E1218 01:57:34.751911 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(the warning above repeats verbatim 21 more times)
E1218 01:57:57.379389 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 2 more times]
I1218 01:57:59.729519 1261148 config.go:182] Loaded profile config "calico-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 7 more times]
E1218 01:58:08.301911 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 16 more times]
E1218 01:58:25.214335 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 54 more times]
E1218 01:59:20.449100 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 25 more times]
E1218 01:59:45.715273 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:59:45.721700 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:59:45.733001 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:59:45.754357 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:59:45.795776 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:59:45.878011 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:59:46.040838 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:59:46.362520 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 01:59:47.004817 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 01:59:48.286532 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 2 more times]
E1218 01:59:50.848278 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 4 more times]
E1218 01:59:55.972653 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 7 more times]
E1218 02:00:04.395256 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 22 more times]
E1218 02:00:26.696436 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 14 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 02:01:07.658647 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 02:01:10.621388 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:01:10.627849 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:01:10.639277 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 02:01:10.661158 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:01:10.702590 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:01:10.784023 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:01:10.945560 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:01:11.267302 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 02:01:11.680912 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:01:11.909509 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 02:01:13.191559 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1218 02:01:15.753633 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
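The cert_rotation errors above appear to come from client-go trying to reload client certificates for profiles (auto-459533, kindnet-459533, default-k8s-diff-port-207500) whose files are no longer on disk, presumably removed by parallel test cleanup. A minimal, hypothetical Go sketch of guarding against such stale kubeconfig references; the paths and helper are illustrative, not minikube code:

// Hypothetical sketch (not minikube code): client-go logs
// "Loading client cert failed" when a kubeconfig still points at a
// client.crt that has already been deleted. Checking the referenced
// files up front avoids the repeated error.
package main

import (
	"fmt"
	"os"
)

// certPresent reports whether the client cert and key a kubeconfig
// references still exist on disk (paths are illustrative).
func certPresent(certPath, keyPath string) bool {
	for _, p := range []string{certPath, keyPath} {
		if _, err := os.Stat(p); err != nil {
			fmt.Fprintf(os.Stderr, "stale kubeconfig reference: %v\n", err)
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical profile path mirroring the errors above.
	profile := os.ExpandEnv("$HOME/.minikube/profiles/kindnet-459533")
	if !certPresent(profile+"/client.crt", profile+"/client.key") {
		fmt.Println("profile certs missing; skip building a client for it")
	}
}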
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 2 (323.005395ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-970975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-970975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.961µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-970975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
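For context, the failing helper is a poll loop: it lists pods by label selector until a deadline and then gives up with "context deadline exceeded". A minimal sketch of that pattern, assuming kubectl on PATH and the context name from this run; this is not the real helpers_test.go implementation:

// Minimal sketch of the label-selector poll that helpers_test.go:338
// performs (assumes kubectl on PATH; not the real helper code).
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForPods polls `kubectl get pods -l <selector>` until at least one
// pod name is returned or the context deadline expires.
func waitForPods(ctx context.Context, kubeContext, ns, selector string) error {
	tick := time.NewTicker(5 * time.Second)
	defer tick.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext, "-n", ns,
			"get", "pods", "-l", selector, "-o", "name").Output()
		if err == nil && len(out) > 0 {
			return nil // a matching pod exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q did not appear: %w", selector, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	fmt.Println(waitForPods(ctx, "no-preload-970975",
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard"))
}

With the apiserver refusing connections on 192.168.76.2:8443, every iteration of such a loop fails the same way, which is exactly the repeated WARNING pattern recorded above.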
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-970975
helpers_test.go:244: (dbg) docker inspect no-preload-970975:

-- stdout --
	[
	    {
	        "Id": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	        "Created": "2025-12-18T01:31:17.073767234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1542592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-18T01:41:17.647711914Z",
	            "FinishedAt": "2025-12-18T01:41:16.31019941Z"
	        },
	        "Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
	        "ResolvConfPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/hosts",
	        "LogPath": "/var/lib/docker/containers/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d/b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d-json.log",
	        "Name": "/no-preload-970975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-970975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-970975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1403d931305bac923396ecd683086676c177077d70b257369e8884c1383647d",
	                "LowerDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3692e9d5d52635ad98c41a67808d2995bf8f6bb6cd9b1f586b28c7aab8ace5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-970975",
	                "Source": "/var/lib/docker/volumes/no-preload-970975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-970975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-970975",
	                "name.minikube.sigs.k8s.io": "no-preload-970975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8868484521f3c95b5d3384207de825b735eca41ce409d5b6097489f36adbd1f",
	            "SandboxKey": "/var/run/docker/netns/a8868484521f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34213"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34214"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34215"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-970975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:c4:c7:ad:db:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ab8f39244bcf5d4900f2aa17c2792aefcc54582c4c250699be8d71c4c2a27a9",
	                    "EndpointID": "f645b66df5fb6b54a71529960c16fc0d0eda8d0c9be9273792de657fffcd9b75",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-970975",
	                        "b1403d931305"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
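Rather than reading the full JSON dump above, individual fields can be pulled straight from docker inspect with a --format Go template, the same mechanism the cli_runner invocations later in this log use. A small sketch, assuming the docker CLI is on PATH; the container name is taken from this report:

// Sketch: pulling single fields out of `docker inspect` with a
// --format Go template instead of parsing the full JSON dump above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspectField(container, tmpl string) (string, error) {
	out, err := exec.Command("docker", "inspect",
		"--format", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ip, err := inspectField("no-preload-970975",
		"{{(index .NetworkSettings.Networks \"no-preload-970975\").IPAddress}}")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container IP:", ip) // 192.168.76.2 in the dump above
}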
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975: exit status 2 (342.076967ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
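The harness treats exit status 2 from minikube status as potentially benign ("may be ok"): here the Host field reports Running while the APIServer field reports Stopped. A sketch of reading one status field together with the exit code, assuming minikube on PATH and the profile name from this run:

// Sketch of separating "host running" from "apiserver stopped" using
// minikube status field templates, mirroring the two checks above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus returns one status field plus the process exit code.
func componentStatus(profile, field string) (string, int) {
	cmd := exec.Command("minikube", "status",
		"--format", "{{."+field+"}}", "-p", profile)
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	host, _ := componentStatus("no-preload-970975", "Host")
	api, code := componentStatus("no-preload-970975", "APIServer")
	// In this report: host=Running, apiserver=Stopped, exit code 2.
	fmt.Printf("host=%s apiserver=%s (exit %d, may be ok)\n", host, api, code)
}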
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-970975 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                 ARGS                                                                                  │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-459533 sudo systemctl status kubelet --all --full --no-pager                                                                                        │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │ 18 Dec 25 01:59 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl cat kubelet --no-pager                                                                                                        │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │ 18 Dec 25 01:59 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                         │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │ 18 Dec 25 01:59 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo cat /etc/kubernetes/kubelet.conf                                                                                                        │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │ 18 Dec 25 01:59 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo cat /var/lib/kubelet/config.yaml                                                                                                        │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │ 18 Dec 25 01:59 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl status docker --all --full --no-pager                                                                                         │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │                     │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl cat docker --no-pager                                                                                                         │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │ 18 Dec 25 01:59 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo cat /etc/docker/daemon.json                                                                                                             │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │                     │
	│ ssh     │ -p custom-flannel-459533 sudo docker system info                                                                                                                      │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 01:59 UTC │                     │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl status cri-docker --all --full --no-pager                                                                                     │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │                     │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl cat cri-docker --no-pager                                                                                                     │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │                     │
	│ ssh     │ -p custom-flannel-459533 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                          │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo cri-dockerd --version                                                                                                                   │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl status containerd --all --full --no-pager                                                                                     │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl cat containerd --no-pager                                                                                                     │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo cat /lib/systemd/system/containerd.service                                                                                              │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo cat /etc/containerd/config.toml                                                                                                         │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo containerd config dump                                                                                                                  │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl status crio --all --full --no-pager                                                                                           │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │                     │
	│ ssh     │ -p custom-flannel-459533 sudo systemctl cat crio --no-pager                                                                                                           │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                 │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ ssh     │ -p custom-flannel-459533 sudo crio config                                                                                                                             │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ delete  │ -p custom-flannel-459533                                                                                                                                              │ custom-flannel-459533     │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │ 18 Dec 25 02:00 UTC │
	│ start   │ -p enable-default-cni-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd │ enable-default-cni-459533 │ jenkins │ v1.37.0 │ 18 Dec 25 02:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 02:00:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 02:00:08.890417 1597634 out.go:360] Setting OutFile to fd 1 ...
	I1218 02:00:08.890594 1597634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 02:00:08.890605 1597634 out.go:374] Setting ErrFile to fd 2...
	I1218 02:00:08.890611 1597634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 02:00:08.890875 1597634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 02:00:08.891338 1597634 out.go:368] Setting JSON to false
	I1218 02:00:08.892228 1597634 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":31355,"bootTime":1765991854,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 02:00:08.892297 1597634 start.go:143] virtualization:  
	I1218 02:00:08.896877 1597634 out.go:179] * [enable-default-cni-459533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 02:00:08.901682 1597634 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 02:00:08.901811 1597634 notify.go:221] Checking for updates...
	I1218 02:00:08.908665 1597634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 02:00:08.912106 1597634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 02:00:08.915947 1597634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 02:00:08.919140 1597634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 02:00:08.922303 1597634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 02:00:08.926098 1597634 config.go:182] Loaded profile config "no-preload-970975": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 02:00:08.926241 1597634 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 02:00:08.959056 1597634 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 02:00:08.959190 1597634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 02:00:09.015806 1597634 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 02:00:09.00588266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 02:00:09.015939 1597634 docker.go:319] overlay module found
	I1218 02:00:09.019223 1597634 out.go:179] * Using the docker driver based on user configuration
	I1218 02:00:09.022175 1597634 start.go:309] selected driver: docker
	I1218 02:00:09.022200 1597634 start.go:927] validating driver "docker" against <nil>
	I1218 02:00:09.022215 1597634 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 02:00:09.022976 1597634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 02:00:09.077491 1597634 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 02:00:09.068080669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 02:00:09.077647 1597634 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1218 02:00:09.077872 1597634 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1218 02:00:09.077904 1597634 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 02:00:09.080891 1597634 out.go:179] * Using Docker driver with root privileges
	I1218 02:00:09.083884 1597634 cni.go:84] Creating CNI manager for "bridge"
	I1218 02:00:09.083908 1597634 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1218 02:00:09.083990 1597634 start.go:353] cluster config:
	{Name:enable-default-cni-459533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 02:00:09.087149 1597634 out.go:179] * Starting "enable-default-cni-459533" primary control-plane node in "enable-default-cni-459533" cluster
	I1218 02:00:09.090022 1597634 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 02:00:09.093149 1597634 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1218 02:00:09.096282 1597634 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1218 02:00:09.096329 1597634 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
	I1218 02:00:09.096340 1597634 cache.go:65] Caching tarball of preloaded images
	I1218 02:00:09.096387 1597634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 02:00:09.096426 1597634 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 02:00:09.096437 1597634 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1218 02:00:09.096547 1597634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/config.json ...
	I1218 02:00:09.096564 1597634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/config.json: {Name:mk47cd9820f3112f9b4dfd91cef0a1ee5e468ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:09.116560 1597634 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1218 02:00:09.116585 1597634 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1218 02:00:09.116604 1597634 cache.go:243] Successfully downloaded all kic artifacts
	I1218 02:00:09.116703 1597634 start.go:360] acquireMachinesLock for enable-default-cni-459533: {Name:mkebf3f8c1d3f90852a947465264226d9bbdff0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 02:00:09.116871 1597634 start.go:364] duration metric: took 135.176µs to acquireMachinesLock for "enable-default-cni-459533"
	I1218 02:00:09.116902 1597634 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-459533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 02:00:09.116989 1597634 start.go:125] createHost starting for "" (driver="docker")
	I1218 02:00:09.120559 1597634 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1218 02:00:09.120831 1597634 start.go:159] libmachine.API.Create for "enable-default-cni-459533" (driver="docker")
	I1218 02:00:09.120870 1597634 client.go:173] LocalClient.Create starting
	I1218 02:00:09.120933 1597634 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
	I1218 02:00:09.120982 1597634 main.go:143] libmachine: Decoding PEM data...
	I1218 02:00:09.121008 1597634 main.go:143] libmachine: Parsing certificate...
	I1218 02:00:09.121066 1597634 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
	I1218 02:00:09.121086 1597634 main.go:143] libmachine: Decoding PEM data...
	I1218 02:00:09.121098 1597634 main.go:143] libmachine: Parsing certificate...
	I1218 02:00:09.121472 1597634 cli_runner.go:164] Run: docker network inspect enable-default-cni-459533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 02:00:09.137771 1597634 cli_runner.go:211] docker network inspect enable-default-cni-459533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 02:00:09.137868 1597634 network_create.go:284] running [docker network inspect enable-default-cni-459533] to gather additional debugging logs...
	I1218 02:00:09.137890 1597634 cli_runner.go:164] Run: docker network inspect enable-default-cni-459533
	W1218 02:00:09.162067 1597634 cli_runner.go:211] docker network inspect enable-default-cni-459533 returned with exit code 1
	I1218 02:00:09.162096 1597634 network_create.go:287] error running [docker network inspect enable-default-cni-459533]: docker network inspect enable-default-cni-459533: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-459533 not found
	I1218 02:00:09.162126 1597634 network_create.go:289] output of [docker network inspect enable-default-cni-459533]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-459533 not found
	
	** /stderr **
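The failed inspect above is the expected first-start probe: minikube renders a JSON-like summary of the network through docker's --format Go template and treats a non-zero exit with "not found" on stderr as the signal to create the network. A sketch of the same probe via os/exec, with the template trimmed (the full template is in the Run: line above):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	name := "enable-default-cni-459533"
	// Trimmed --format template; the log above shows the full one.
	cmd := exec.Command("docker", "network", "inspect", name,
		"--format", `{"Name": "{{.Name}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}"}`)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// docker exits non-zero when the network does not exist;
		// minikube falls through to `docker network create` here.
		fmt.Printf("inspect failed (%v), stderr: %s", err, stderr.String())
		return
	}
	fmt.Println(stdout.String())
}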
	I1218 02:00:09.162230 1597634 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 02:00:09.182245 1597634 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
	I1218 02:00:09.182614 1597634 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-687fba22ee0f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:82:88:15:4b:79:13} reservation:<nil>}
	I1218 02:00:09.182864 1597634 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb17cfebd2dd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:ff:26:b3:7d:81} reservation:<nil>}
	I1218 02:00:09.183154 1597634 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3ab8f39244bc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:a9:2b:d7:5d:ce} reservation:<nil>}
	I1218 02:00:09.183579 1597634 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e8e60}
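The skipping/using lines above trace the subnet walk: candidates start at 192.168.49.0/24 and step the third octet by 9 (49, 58, 67, 76, 85, ...) until one collides with no existing bridge interface. A simplified sketch of that walk, assuming the taken set has already been collected (minikube derives it from the host's interfaces):

package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... in steps
// of 9 on the third octet, as seen in the log, and returns the first
// candidate not present in taken.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-9d156d3060c4
		"192.168.58.0/24": true, // br-687fba22ee0f
		"192.168.67.0/24": true, // br-fb17cfebd2dd
		"192.168.76.0/24": true, // br-3ab8f39244bc
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, matching the log
}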
	I1218 02:00:09.183598 1597634 network_create.go:124] attempt to create docker network enable-default-cni-459533 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1218 02:00:09.183659 1597634 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-459533 enable-default-cni-459533
	I1218 02:00:09.248363 1597634 network_create.go:108] docker network enable-default-cni-459533 192.168.85.0/24 created
	I1218 02:00:09.248395 1597634 kic.go:121] calculated static IP "192.168.85.2" for the "enable-default-cni-459533" container
	I1218 02:00:09.248478 1597634 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 02:00:09.264933 1597634 cli_runner.go:164] Run: docker volume create enable-default-cni-459533 --label name.minikube.sigs.k8s.io=enable-default-cni-459533 --label created_by.minikube.sigs.k8s.io=true
	I1218 02:00:09.283606 1597634 oci.go:103] Successfully created a docker volume enable-default-cni-459533
	I1218 02:00:09.283713 1597634 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-459533-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-459533 --entrypoint /usr/bin/test -v enable-default-cni-459533:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1218 02:00:09.854956 1597634 oci.go:107] Successfully prepared a docker volume enable-default-cni-459533
	I1218 02:00:09.855030 1597634 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1218 02:00:09.855053 1597634 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 02:00:09.855119 1597634 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-459533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 02:00:13.949384 1597634 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-459533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.094204461s)
	I1218 02:00:13.949415 1597634 kic.go:203] duration metric: took 4.094369192s to extract preloaded images to volume ...
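The Run/Completed pair above is the preload shortcut: the host's lz4 image tarball is mounted read-only into a throwaway kicbase container whose entrypoint is tar, and unpacked straight into the named volume, so the node container starts with its image store pre-populated (~4s here). A sketch of composing that docker run from Go; the tarball path is shortened and the image digest omitted for the sketch:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		volume  = "enable-default-cni-459533"
		tarball = "/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4" // shortened
		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186" // digest omitted here
	)
	args := []string{
		"run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball + ":/preloaded.tar:ro", // host tarball, read-only
		"-v", volume + ":/extractDir",        // target named volume
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	}
	start := time.Now()
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}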
	W1218 02:00:13.949559 1597634 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 02:00:13.949676 1597634 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 02:00:14.008379 1597634 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-459533 --name enable-default-cni-459533 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-459533 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-459533 --network enable-default-cni-459533 --ip 192.168.85.2 --volume enable-default-cni-459533:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1218 02:00:14.324233 1597634 cli_runner.go:164] Run: docker container inspect enable-default-cni-459533 --format={{.State.Running}}
	I1218 02:00:14.349115 1597634 cli_runner.go:164] Run: docker container inspect enable-default-cni-459533 --format={{.State.Status}}
	I1218 02:00:14.372884 1597634 cli_runner.go:164] Run: docker exec enable-default-cni-459533 stat /var/lib/dpkg/alternatives/iptables
	I1218 02:00:14.434779 1597634 oci.go:144] the created container "enable-default-cni-459533" has a running status.
	I1218 02:00:14.434808 1597634 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa...
	I1218 02:00:14.535951 1597634 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 02:00:14.559337 1597634 cli_runner.go:164] Run: docker container inspect enable-default-cni-459533 --format={{.State.Status}}
	I1218 02:00:14.582680 1597634 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 02:00:14.582699 1597634 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-459533 chown docker:docker /home/docker/.ssh/authorized_keys]
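kic.go:225 above creates the machine's SSH keypair, then the kic_runner lines copy the ~381-byte public half to /home/docker/.ssh/authorized_keys inside the container and chown it. A sketch of the key-generation half with crypto/rsa and golang.org/x/crypto/ssh; the output file names are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA key standing in for the machine's id_rsa.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key -> PEM (id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// Public key -> authorized_keys format (id_rsa.pub); this is the
	// payload the log copies to /home/docker/.ssh/authorized_keys.
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}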
	I1218 02:00:14.635563 1597634 cli_runner.go:164] Run: docker container inspect enable-default-cni-459533 --format={{.State.Status}}
	I1218 02:00:14.656183 1597634 machine.go:94] provisionDockerMachine start ...
	I1218 02:00:14.656418 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:14.682470 1597634 main.go:143] libmachine: Using SSH client type: native
	I1218 02:00:14.682833 1597634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1218 02:00:14.682844 1597634 main.go:143] libmachine: About to run SSH command:
	hostname
	I1218 02:00:14.683412 1597634 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33892->127.0.0.1:34242: read: connection reset by peer
	I1218 02:00:17.840303 1597634 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-459533
	
	I1218 02:00:17.840330 1597634 ubuntu.go:182] provisioning hostname "enable-default-cni-459533"
	I1218 02:00:17.840425 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:17.858132 1597634 main.go:143] libmachine: Using SSH client type: native
	I1218 02:00:17.858464 1597634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1218 02:00:17.858482 1597634 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-459533 && echo "enable-default-cni-459533" | sudo tee /etc/hostname
	I1218 02:00:18.024676 1597634 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-459533
	
	I1218 02:00:18.024761 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:18.046245 1597634 main.go:143] libmachine: Using SSH client type: native
	I1218 02:00:18.046569 1597634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34242 <nil> <nil>}
	I1218 02:00:18.046591 1597634 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-459533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-459533/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-459533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 02:00:18.209070 1597634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1218 02:00:18.209144 1597634 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
	I1218 02:00:18.209180 1597634 ubuntu.go:190] setting up certificates
	I1218 02:00:18.209211 1597634 provision.go:84] configureAuth start
	I1218 02:00:18.209288 1597634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-459533
	I1218 02:00:18.226837 1597634 provision.go:143] copyHostCerts
	I1218 02:00:18.226908 1597634 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
	I1218 02:00:18.226918 1597634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
	I1218 02:00:18.226997 1597634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
	I1218 02:00:18.227103 1597634 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
	I1218 02:00:18.227108 1597634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
	I1218 02:00:18.227135 1597634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
	I1218 02:00:18.227194 1597634 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
	I1218 02:00:18.227199 1597634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
	I1218 02:00:18.227241 1597634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
	I1218 02:00:18.227308 1597634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-459533 san=[127.0.0.1 192.168.85.2 enable-default-cni-459533 localhost minikube]
	I1218 02:00:18.639681 1597634 provision.go:177] copyRemoteCerts
	I1218 02:00:18.639753 1597634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 02:00:18.639799 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:18.662080 1597634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa Username:docker}
	I1218 02:00:18.768925 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 02:00:18.787613 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1218 02:00:18.805875 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 02:00:18.823521 1597634 provision.go:87] duration metric: took 614.282895ms to configureAuth
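configureAuth above took ~614ms, most of it generating the docker server certificate signed by the local CA with san=[127.0.0.1 192.168.85.2 enable-default-cni-459533 localhost minikube]. A compressed crypto/x509 sketch of issuing such a SAN'd server certificate; the throwaway in-memory CA here stands in for the real ca.pem/ca-key.pem, and the exact fields minikube sets may differ:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem in the log.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(
		must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server cert with the SAN set from provision.go:117 above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-459533"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"enable-default-cni-459533", "localhost", "minikube"},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Printf("issued %d-byte DER server certificate\n", len(der))
}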
	I1218 02:00:18.823549 1597634 ubuntu.go:206] setting minikube options for container-runtime
	I1218 02:00:18.823750 1597634 config.go:182] Loaded profile config "enable-default-cni-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 02:00:18.823763 1597634 machine.go:97] duration metric: took 4.167542589s to provisionDockerMachine
	I1218 02:00:18.823772 1597634 client.go:176] duration metric: took 9.702895686s to LocalClient.Create
	I1218 02:00:18.823792 1597634 start.go:167] duration metric: took 9.702963329s to libmachine.API.Create "enable-default-cni-459533"
	I1218 02:00:18.823801 1597634 start.go:293] postStartSetup for "enable-default-cni-459533" (driver="docker")
	I1218 02:00:18.823811 1597634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 02:00:18.823876 1597634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 02:00:18.823921 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:18.842637 1597634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa Username:docker}
	I1218 02:00:18.953064 1597634 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 02:00:18.956800 1597634 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 02:00:18.956833 1597634 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1218 02:00:18.956845 1597634 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
	I1218 02:00:18.956906 1597634 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
	I1218 02:00:18.956994 1597634 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
	I1218 02:00:18.957100 1597634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 02:00:18.965302 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 02:00:18.983842 1597634 start.go:296] duration metric: took 160.026185ms for postStartSetup
	I1218 02:00:18.984281 1597634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-459533
	I1218 02:00:19.002752 1597634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/config.json ...
	I1218 02:00:19.003117 1597634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 02:00:19.003187 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:19.022753 1597634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa Username:docker}
	I1218 02:00:19.130020 1597634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 02:00:19.135153 1597634 start.go:128] duration metric: took 10.018138819s to createHost
	I1218 02:00:19.135181 1597634 start.go:83] releasing machines lock for "enable-default-cni-459533", held for 10.018295771s
	I1218 02:00:19.135254 1597634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-459533
	I1218 02:00:19.152797 1597634 ssh_runner.go:195] Run: cat /version.json
	I1218 02:00:19.152808 1597634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 02:00:19.152850 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:19.152878 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:19.174677 1597634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa Username:docker}
	I1218 02:00:19.180772 1597634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa Username:docker}
	I1218 02:00:19.375848 1597634 ssh_runner.go:195] Run: systemctl --version
	I1218 02:00:19.382590 1597634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1218 02:00:19.387339 1597634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 02:00:19.387410 1597634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 02:00:19.416307 1597634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1218 02:00:19.416334 1597634 start.go:496] detecting cgroup driver to use...
	I1218 02:00:19.416368 1597634 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1218 02:00:19.416422 1597634 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 02:00:19.432446 1597634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 02:00:19.445799 1597634 docker.go:218] disabling cri-docker service (if available) ...
	I1218 02:00:19.445884 1597634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 02:00:19.463271 1597634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 02:00:19.482168 1597634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 02:00:19.606995 1597634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 02:00:19.759786 1597634 docker.go:234] disabling docker service ...
	I1218 02:00:19.759882 1597634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 02:00:19.782745 1597634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 02:00:19.795935 1597634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 02:00:19.916089 1597634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 02:00:20.044204 1597634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 02:00:20.060010 1597634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 02:00:20.077485 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1218 02:00:20.087673 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 02:00:20.098006 1597634 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 02:00:20.098112 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 02:00:20.107956 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 02:00:20.118733 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 02:00:20.128384 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 02:00:20.138170 1597634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 02:00:20.146938 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 02:00:20.156320 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1218 02:00:20.165835 1597634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1218 02:00:20.175377 1597634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 02:00:20.183627 1597634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 02:00:20.191837 1597634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 02:00:20.304208 1597634 ssh_runner.go:195] Run: sudo systemctl restart containerd
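The sed run at 02:00:20.077-20.175 pins the sandbox image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the detected cgroupfs host driver, rewrites v1 runtime references to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d before restarting containerd. The same edits expressed as Go regexp rewrites rather than shelling out to sed (a sketch; patterns follow the logged commands):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalents of the sed -r 's|...|...|' commands in the log above.
	rewrites := []struct{ re, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver detected on host
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
		{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rewrites {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
	// A `systemctl restart containerd`, as in the log, must follow for
	// the new config to take effect.
}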
	I1218 02:00:20.445761 1597634 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1218 02:00:20.445880 1597634 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1218 02:00:20.452801 1597634 start.go:564] Will wait 60s for crictl version
	I1218 02:00:20.452866 1597634 ssh_runner.go:195] Run: which crictl
	I1218 02:00:20.456964 1597634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1218 02:00:20.482122 1597634 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1218 02:00:20.482234 1597634 ssh_runner.go:195] Run: containerd --version
	I1218 02:00:20.503323 1597634 ssh_runner.go:195] Run: containerd --version
	I1218 02:00:20.531799 1597634 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1218 02:00:20.534798 1597634 cli_runner.go:164] Run: docker network inspect enable-default-cni-459533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 02:00:20.552397 1597634 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1218 02:00:20.556435 1597634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 02:00:20.566089 1597634 kubeadm.go:884] updating cluster {Name:enable-default-cni-459533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1218 02:00:20.566203 1597634 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1218 02:00:20.566268 1597634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 02:00:20.591368 1597634 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 02:00:20.591392 1597634 containerd.go:534] Images already preloaded, skipping extraction
	I1218 02:00:20.591461 1597634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 02:00:20.622183 1597634 containerd.go:627] all images are preloaded for containerd runtime.
	I1218 02:00:20.622256 1597634 cache_images.go:86] Images are preloaded, skipping loading
	I1218 02:00:20.622280 1597634 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 containerd true true} ...
	I1218 02:00:20.622393 1597634 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-459533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1218 02:00:20.622485 1597634 ssh_runner.go:195] Run: sudo crictl info
	I1218 02:00:20.648997 1597634 cni.go:84] Creating CNI manager for "bridge"
	I1218 02:00:20.649027 1597634 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1218 02:00:20.649049 1597634 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-459533 NodeName:enable-default-cni-459533 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 02:00:20.649167 1597634 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-459533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 02:00:20.649236 1597634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1218 02:00:20.657319 1597634 binaries.go:51] Found k8s binaries, skipping transfer
	I1218 02:00:20.657446 1597634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 02:00:20.665551 1597634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1218 02:00:20.679448 1597634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 02:00:20.693596 1597634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2238 bytes)
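The 2238-byte kubeadm.yaml just copied is the rendered config shown above; it is consumed by the kubeadm init invocation further down (the Start: line at 02:00:22.212). A sketch of assembling that invocation, with the --ignore-preflight-errors list abbreviated (the full list is in the Start: line):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.34.3"
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Swap", "NumCPU", "Mem", "SystemVerification", // abbreviated; see the Start: line below
	}
	// The pinned binaries dir is prepended to PATH so this kubeadm,
	// not any system copy, runs against the rendered config.
	cmd := exec.Command("sudo", "/bin/bash", "-c",
		fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
			binDir, strings.Join(ignores, ",")))
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}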
	I1218 02:00:20.711694 1597634 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1218 02:00:20.716727 1597634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 02:00:20.727086 1597634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 02:00:20.850017 1597634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 02:00:20.867507 1597634 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533 for IP: 192.168.85.2
	I1218 02:00:20.867531 1597634 certs.go:195] generating shared ca certs ...
	I1218 02:00:20.867548 1597634 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:20.867773 1597634 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
	I1218 02:00:20.867850 1597634 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
	I1218 02:00:20.867865 1597634 certs.go:257] generating profile certs ...
	I1218 02:00:20.867955 1597634 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/client.key
	I1218 02:00:20.867994 1597634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/client.crt with IP's: []
	I1218 02:00:21.243145 1597634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/client.crt ...
	I1218 02:00:21.243182 1597634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/client.crt: {Name:mkc48e05b513c55ff26a5533132b7ae45590daa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:21.243431 1597634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/client.key ...
	I1218 02:00:21.243451 1597634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/client.key: {Name:mk803d91133a218fc0ab09eadd7dda68f23b3041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:21.243556 1597634 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.key.fde01e72
	I1218 02:00:21.243579 1597634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.crt.fde01e72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1218 02:00:21.411762 1597634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.crt.fde01e72 ...
	I1218 02:00:21.411800 1597634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.crt.fde01e72: {Name:mk79a4928750ba8d821f460f1a82e44280d98694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:21.411993 1597634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.key.fde01e72 ...
	I1218 02:00:21.412008 1597634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.key.fde01e72: {Name:mkcd4a093261628565c64c7511287d37006ef0c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:21.412103 1597634 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.crt.fde01e72 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.crt
	I1218 02:00:21.412180 1597634 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.key.fde01e72 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.key
	I1218 02:00:21.412243 1597634 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/proxy-client.key
	I1218 02:00:21.412262 1597634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/proxy-client.crt with IP's: []
	I1218 02:00:21.614693 1597634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/proxy-client.crt ...
	I1218 02:00:21.614727 1597634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/proxy-client.crt: {Name:mkaadd8db6dfcaabdd62c4ffb995be782412d05b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:21.614921 1597634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/proxy-client.key ...
	I1218 02:00:21.614935 1597634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/proxy-client.key: {Name:mk22d8c9aff620e51c13ab22bafbd3913edfa8ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:21.615147 1597634 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
	W1218 02:00:21.615196 1597634 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
	I1218 02:00:21.615205 1597634 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 02:00:21.615235 1597634 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
	I1218 02:00:21.615265 1597634 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
	I1218 02:00:21.615293 1597634 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
	I1218 02:00:21.615345 1597634 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
	I1218 02:00:21.615947 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 02:00:21.636672 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 02:00:21.660058 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 02:00:21.681732 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 02:00:21.705628 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1218 02:00:21.725007 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 02:00:21.742947 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 02:00:21.761465 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/enable-default-cni-459533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 02:00:21.779765 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
	I1218 02:00:21.797911 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 02:00:21.816753 1597634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
	I1218 02:00:21.835409 1597634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 02:00:21.848396 1597634 ssh_runner.go:195] Run: openssl version
	I1218 02:00:21.855146 1597634 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
	I1218 02:00:21.862792 1597634 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
	I1218 02:00:21.870359 1597634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
	I1218 02:00:21.874283 1597634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
	I1218 02:00:21.874409 1597634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
	I1218 02:00:21.916340 1597634 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1218 02:00:21.924220 1597634 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
	I1218 02:00:21.931774 1597634 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
	I1218 02:00:21.939686 1597634 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
	I1218 02:00:21.947672 1597634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
	I1218 02:00:21.952384 1597634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
	I1218 02:00:21.952502 1597634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
	I1218 02:00:21.994661 1597634 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1218 02:00:22.003250 1597634 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
	I1218 02:00:22.012261 1597634 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1218 02:00:22.020895 1597634 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1218 02:00:22.029337 1597634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 02:00:22.033751 1597634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
	I1218 02:00:22.033868 1597634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 02:00:22.077040 1597634 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1218 02:00:22.085152 1597634 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
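The openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL hashed trust layout: 1261148.pem becomes /etc/ssl/certs/51391683.0, 12611482.pem becomes 3ec20f2e.0, and minikubeCA.pem becomes b5213941.0, so TLS stacks that look certificates up by subject hash can find them. A sketch of one such installation step via os/exec:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCert links certPath into /etc/ssl/certs under its OpenSSL
// subject hash, mirroring the openssl/ln pairs in the log above.
func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}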
	I1218 02:00:22.093025 1597634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1218 02:00:22.097136 1597634 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1218 02:00:22.097247 1597634 kubeadm.go:401] StartCluster: {Name:enable-default-cni-459533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:enable-default-cni-459533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 02:00:22.097350 1597634 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1218 02:00:22.097417 1597634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 02:00:22.124381 1597634 cri.go:89] found id: ""
	I1218 02:00:22.124477 1597634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 02:00:22.132689 1597634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 02:00:22.140750 1597634 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1218 02:00:22.140831 1597634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 02:00:22.149139 1597634 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 02:00:22.149170 1597634 kubeadm.go:158] found existing configuration files:
	
	I1218 02:00:22.149227 1597634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1218 02:00:22.157328 1597634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1218 02:00:22.157396 1597634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1218 02:00:22.165123 1597634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1218 02:00:22.173100 1597634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1218 02:00:22.173185 1597634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1218 02:00:22.180923 1597634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1218 02:00:22.188782 1597634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1218 02:00:22.188884 1597634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1218 02:00:22.196577 1597634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1218 02:00:22.204754 1597634 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1218 02:00:22.204824 1597634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
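The four grep/rm pairs above are the stale-kubeconfig sweep: any conf that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm init can regenerate it; on this fresh node every grep exits 2 simply because the file is absent, which is harmless. A local-filesystem sketch of the same sweep (the real code runs these commands over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file (first start, as in the log) or wrong endpoint:
		// remove it so kubeadm init writes a fresh one.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // errors ignored, mirroring rm -f
			fmt.Printf("removed (or absent): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}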
	I1218 02:00:22.212243 1597634 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 02:00:22.254364 1597634 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1218 02:00:22.254680 1597634 kubeadm.go:319] [preflight] Running pre-flight checks
	I1218 02:00:22.280320 1597634 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1218 02:00:22.280463 1597634 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1218 02:00:22.280540 1597634 kubeadm.go:319] OS: Linux
	I1218 02:00:22.280619 1597634 kubeadm.go:319] CGROUPS_CPU: enabled
	I1218 02:00:22.280747 1597634 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1218 02:00:22.280828 1597634 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1218 02:00:22.280919 1597634 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1218 02:00:22.280996 1597634 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1218 02:00:22.281068 1597634 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1218 02:00:22.281141 1597634 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1218 02:00:22.281213 1597634 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1218 02:00:22.281286 1597634 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1218 02:00:22.362903 1597634 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 02:00:22.363075 1597634 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 02:00:22.363222 1597634 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 02:00:22.373063 1597634 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 02:00:22.380161 1597634 out.go:252]   - Generating certificates and keys ...
	I1218 02:00:22.380477 1597634 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1218 02:00:22.380605 1597634 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1218 02:00:22.544760 1597634 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 02:00:23.891791 1597634 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1218 02:00:24.616711 1597634 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1218 02:00:25.716257 1597634 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1218 02:00:26.017985 1597634 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1218 02:00:26.018449 1597634 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-459533 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 02:00:26.488653 1597634 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1218 02:00:26.489039 1597634 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-459533 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1218 02:00:26.862485 1597634 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 02:00:27.930887 1597634 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 02:00:28.104080 1597634 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1218 02:00:28.104468 1597634 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 02:00:28.875099 1597634 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 02:00:29.379525 1597634 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1218 02:00:29.724392 1597634 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 02:00:29.850468 1597634 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 02:00:30.164664 1597634 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 02:00:30.164765 1597634 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 02:00:30.167920 1597634 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 02:00:30.174626 1597634 out.go:252]   - Booting up control plane ...
	I1218 02:00:30.174741 1597634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 02:00:30.175321 1597634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 02:00:30.180608 1597634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 02:00:30.197263 1597634 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 02:00:30.197380 1597634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1218 02:00:30.207504 1597634 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1218 02:00:30.207608 1597634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 02:00:30.207650 1597634 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1218 02:00:30.346953 1597634 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1218 02:00:30.347078 1597634 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1218 02:00:31.852813 1597634 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.502068882s
	I1218 02:00:31.852935 1597634 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1218 02:00:31.853024 1597634 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1218 02:00:31.853123 1597634 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1218 02:00:31.853207 1597634 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1218 02:00:36.805505 1597634 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.952953742s
	I1218 02:00:37.592937 1597634 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.740773702s
	I1218 02:00:38.854307 1597634 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.00189817s
	I1218 02:00:38.886657 1597634 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 02:00:38.906710 1597634 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 02:00:38.923224 1597634 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 02:00:38.923442 1597634 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-459533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 02:00:38.935667 1597634 kubeadm.go:319] [bootstrap-token] Using token: 4z0oiu.jtmtytpnj44skhn3
	I1218 02:00:38.938581 1597634 out.go:252]   - Configuring RBAC rules ...
	I1218 02:00:38.938709 1597634 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 02:00:38.945188 1597634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 02:00:38.953929 1597634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 02:00:38.962019 1597634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 02:00:38.967105 1597634 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 02:00:38.976558 1597634 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 02:00:39.260950 1597634 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 02:00:39.722818 1597634 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1218 02:00:40.261256 1597634 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1218 02:00:40.262668 1597634 kubeadm.go:319] 
	I1218 02:00:40.262743 1597634 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1218 02:00:40.262748 1597634 kubeadm.go:319] 
	I1218 02:00:40.262834 1597634 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1218 02:00:40.262840 1597634 kubeadm.go:319] 
	I1218 02:00:40.262873 1597634 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1218 02:00:40.262933 1597634 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 02:00:40.262984 1597634 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 02:00:40.262988 1597634 kubeadm.go:319] 
	I1218 02:00:40.263043 1597634 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1218 02:00:40.263046 1597634 kubeadm.go:319] 
	I1218 02:00:40.263094 1597634 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 02:00:40.263097 1597634 kubeadm.go:319] 
	I1218 02:00:40.263149 1597634 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1218 02:00:40.263224 1597634 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 02:00:40.263292 1597634 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 02:00:40.263296 1597634 kubeadm.go:319] 
	I1218 02:00:40.263381 1597634 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 02:00:40.263458 1597634 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1218 02:00:40.263462 1597634 kubeadm.go:319] 
	I1218 02:00:40.263547 1597634 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4z0oiu.jtmtytpnj44skhn3 \
	I1218 02:00:40.263651 1597634 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b4077e98a4859192b0456bf3327d2197d85ea7f70e768b14f3ff5e295e626e \
	I1218 02:00:40.263671 1597634 kubeadm.go:319] 	--control-plane 
	I1218 02:00:40.263675 1597634 kubeadm.go:319] 
	I1218 02:00:40.263766 1597634 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1218 02:00:40.263771 1597634 kubeadm.go:319] 
	I1218 02:00:40.263853 1597634 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4z0oiu.jtmtytpnj44skhn3 \
	I1218 02:00:40.263957 1597634 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:98b4077e98a4859192b0456bf3327d2197d85ea7f70e768b14f3ff5e295e626e 
	I1218 02:00:40.269098 1597634 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1218 02:00:40.269327 1597634 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1218 02:00:40.269432 1597634 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
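
The join commands printed above embed a discovery-token-ca-cert-hash. Should that hash ever need to be recomputed by hand, the documented kubeadm openssl recipe applies, pointed at minikube's certificate directory from the [certs] step (/var/lib/minikube/certs); shown here for reference only:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
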
	I1218 02:00:40.269448 1597634 cni.go:84] Creating CNI manager for "bridge"
	I1218 02:00:40.272532 1597634 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1218 02:00:40.275553 1597634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1218 02:00:40.284315 1597634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
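
The 496-byte file copied above is minikube's bridge CNI configuration. Its exact contents are not captured in this log; a representative conflist in the standard CNI schema looks like the following (subnet and plugin options are placeholders and may differ from what minikube actually writes):

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
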
	I1218 02:00:40.297507 1597634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 02:00:40.297630 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:40.297703 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-459533 minikube.k8s.io/updated_at=2025_12_18T02_00_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=enable-default-cni-459533 minikube.k8s.io/primary=true
	I1218 02:00:40.460524 1597634 ops.go:34] apiserver oom_adj: -16
	I1218 02:00:40.460666 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:40.960866 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:41.461368 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:41.961356 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:42.461517 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:42.960752 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:43.461773 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:43.960897 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:44.461125 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:44.961373 1597634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 02:00:45.219655 1597634 kubeadm.go:1114] duration metric: took 4.922066821s to wait for elevateKubeSystemPrivileges
	I1218 02:00:45.219710 1597634 kubeadm.go:403] duration metric: took 23.122468537s to StartCluster
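
The repeated "kubectl get sa default" runs above are a fixed-interval poll: kubeadm's controllers create the "default" ServiceAccount asynchronously, so minikube retries on a ~500ms cadence until the lookup succeeds (4.9s total here, per the elevateKubeSystemPrivileges metric). An equivalent manual wait, for illustration:

    # poll until the 'default' ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms cadence visible in the timestamps above
    done
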
	I1218 02:00:45.219732 1597634 settings.go:142] acquiring lock: {Name:mk5aaf2d4a9cfc7311e7021513d87b9ed3c7bc9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:45.219806 1597634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 02:00:45.221011 1597634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/kubeconfig: {Name:mk9a3e2123ec0ecba3248ff05458b66356801c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 02:00:45.221300 1597634 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1218 02:00:45.221441 1597634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 02:00:45.221738 1597634 config.go:182] Loaded profile config "enable-default-cni-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 02:00:45.221796 1597634 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1218 02:00:45.221871 1597634 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-459533"
	I1218 02:00:45.221894 1597634 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-459533"
	I1218 02:00:45.221926 1597634 host.go:66] Checking if "enable-default-cni-459533" exists ...
	I1218 02:00:45.222951 1597634 cli_runner.go:164] Run: docker container inspect enable-default-cni-459533 --format={{.State.Status}}
	I1218 02:00:45.223178 1597634 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-459533"
	I1218 02:00:45.223209 1597634 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-459533"
	I1218 02:00:45.223517 1597634 cli_runner.go:164] Run: docker container inspect enable-default-cni-459533 --format={{.State.Status}}
	I1218 02:00:45.225849 1597634 out.go:179] * Verifying Kubernetes components...
	I1218 02:00:45.228862 1597634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 02:00:45.290929 1597634 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-459533"
	I1218 02:00:45.296760 1597634 host.go:66] Checking if "enable-default-cni-459533" exists ...
	I1218 02:00:45.297449 1597634 cli_runner.go:164] Run: docker container inspect enable-default-cni-459533 --format={{.State.Status}}
	I1218 02:00:45.306037 1597634 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 02:00:45.309425 1597634 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 02:00:45.309458 1597634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 02:00:45.309542 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:45.340215 1597634 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 02:00:45.340236 1597634 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 02:00:45.340306 1597634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-459533
	I1218 02:00:45.361120 1597634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa Username:docker}
	I1218 02:00:45.382271 1597634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/enable-default-cni-459533/id_rsa Username:docker}
	I1218 02:00:45.627786 1597634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
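
The sed pipeline above rewrites the CoreDNS ConfigMap in place so pods can resolve host.minikube.internal. Reconstructed from the sed expressions in the command, the stanza injected into the Corefile is:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
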
	I1218 02:00:45.627954 1597634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1218 02:00:45.653367 1597634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 02:00:45.664695 1597634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 02:00:46.353633 1597634 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-459533" to be "Ready" ...
	I1218 02:00:46.354510 1597634 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1218 02:00:46.394416 1597634 node_ready.go:49] node "enable-default-cni-459533" is "Ready"
	I1218 02:00:46.394450 1597634 node_ready.go:38] duration metric: took 40.783602ms for node "enable-default-cni-459533" to be "Ready" ...
	I1218 02:00:46.394466 1597634 api_server.go:52] waiting for apiserver process to appear ...
	I1218 02:00:46.394521 1597634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 02:00:46.830415 1597634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.176969213s)
	I1218 02:00:46.830473 1597634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.165707131s)
	I1218 02:00:46.830701 1597634 api_server.go:72] duration metric: took 1.609366064s to wait for apiserver process to appear ...
	I1218 02:00:46.830711 1597634 api_server.go:88] waiting for apiserver healthz status ...
	I1218 02:00:46.830726 1597634 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1218 02:00:46.847911 1597634 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1218 02:00:46.849246 1597634 api_server.go:141] control plane version: v1.34.3
	I1218 02:00:46.849338 1597634 api_server.go:131] duration metric: took 18.620572ms to wait for apiserver health ...
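
The healthz probe above can be reproduced by hand against the same endpoint (illustrative; -k skips TLS verification because the apiserver certificate is signed by minikube's own CA, which is not in the host trust store):

    curl -k https://192.168.85.2:8443/healthz
    # a healthy apiserver answers HTTP 200 with body: ok
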
	I1218 02:00:46.849363 1597634 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 02:00:46.857770 1597634 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1218 02:00:46.860666 1597634 addons.go:530] duration metric: took 1.638861673s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1218 02:00:46.863654 1597634 system_pods.go:59] 8 kube-system pods found
	I1218 02:00:46.863689 1597634 system_pods.go:61] "coredns-66bc5c9577-4zknc" [48b94032-6063-4958-ab97-928f8d4a281f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 02:00:46.863702 1597634 system_pods.go:61] "coredns-66bc5c9577-9lt5m" [a624bb15-f043-4e8a-8c1e-535f474b627a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 02:00:46.863714 1597634 system_pods.go:61] "etcd-enable-default-cni-459533" [4d467375-4af8-4f23-a9fe-c55142a5b4a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1218 02:00:46.863722 1597634 system_pods.go:61] "kube-apiserver-enable-default-cni-459533" [40b7345b-e176-4d4b-a8c8-75ffb020e4e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1218 02:00:46.863730 1597634 system_pods.go:61] "kube-controller-manager-enable-default-cni-459533" [2fb4caec-2996-4097-b9ea-27833764563c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1218 02:00:46.863734 1597634 system_pods.go:61] "kube-proxy-pxwvz" [aad7d42a-7ba4-4d05-af89-e9d619204cd1] Running
	I1218 02:00:46.863739 1597634 system_pods.go:61] "kube-scheduler-enable-default-cni-459533" [8086c809-e07c-41b4-929a-7f6ddd8b69ff] Running
	I1218 02:00:46.863743 1597634 system_pods.go:61] "storage-provisioner" [b928a561-3991-41d4-8d08-9de63bc11882] Pending
	I1218 02:00:46.863749 1597634 system_pods.go:74] duration metric: took 14.368908ms to wait for pod list to return data ...
	I1218 02:00:46.863756 1597634 default_sa.go:34] waiting for default service account to be created ...
	I1218 02:00:46.865719 1597634 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-459533" context rescaled to 1 replicas
	I1218 02:00:46.873255 1597634 default_sa.go:45] found service account: "default"
	I1218 02:00:46.873279 1597634 default_sa.go:55] duration metric: took 9.516742ms for default service account to be created ...
	I1218 02:00:46.873290 1597634 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 02:00:46.878089 1597634 system_pods.go:86] 8 kube-system pods found
	I1218 02:00:46.878123 1597634 system_pods.go:89] "coredns-66bc5c9577-4zknc" [48b94032-6063-4958-ab97-928f8d4a281f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 02:00:46.878138 1597634 system_pods.go:89] "coredns-66bc5c9577-9lt5m" [a624bb15-f043-4e8a-8c1e-535f474b627a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 02:00:46.878162 1597634 system_pods.go:89] "etcd-enable-default-cni-459533" [4d467375-4af8-4f23-a9fe-c55142a5b4a9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1218 02:00:46.878169 1597634 system_pods.go:89] "kube-apiserver-enable-default-cni-459533" [40b7345b-e176-4d4b-a8c8-75ffb020e4e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1218 02:00:46.878177 1597634 system_pods.go:89] "kube-controller-manager-enable-default-cni-459533" [2fb4caec-2996-4097-b9ea-27833764563c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1218 02:00:46.878182 1597634 system_pods.go:89] "kube-proxy-pxwvz" [aad7d42a-7ba4-4d05-af89-e9d619204cd1] Running
	I1218 02:00:46.878187 1597634 system_pods.go:89] "kube-scheduler-enable-default-cni-459533" [8086c809-e07c-41b4-929a-7f6ddd8b69ff] Running
	I1218 02:00:46.878191 1597634 system_pods.go:89] "storage-provisioner" [b928a561-3991-41d4-8d08-9de63bc11882] Pending
	I1218 02:00:46.878197 1597634 system_pods.go:126] duration metric: took 4.902003ms to wait for k8s-apps to be running ...
	I1218 02:00:46.878204 1597634 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 02:00:46.878268 1597634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 02:00:46.897082 1597634 system_svc.go:56] duration metric: took 18.866555ms WaitForService to wait for kubelet
	I1218 02:00:46.897165 1597634 kubeadm.go:587] duration metric: took 1.675829369s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 02:00:46.897200 1597634 node_conditions.go:102] verifying NodePressure condition ...
	I1218 02:00:46.900608 1597634 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 02:00:46.900700 1597634 node_conditions.go:123] node cpu capacity is 2
	I1218 02:00:46.900738 1597634 node_conditions.go:105] duration metric: took 3.504717ms to run NodePressure ...
	I1218 02:00:46.900777 1597634 start.go:242] waiting for startup goroutines ...
	I1218 02:00:46.900800 1597634 start.go:247] waiting for cluster config update ...
	I1218 02:00:46.900823 1597634 start.go:256] writing updated cluster config ...
	I1218 02:00:46.901174 1597634 ssh_runner.go:195] Run: rm -f paused
	I1218 02:00:46.905176 1597634 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1218 02:00:46.909324 1597634 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4zknc" in "kube-system" namespace to be "Ready" or be gone ...
	W1218 02:00:48.915721 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:00:50.915980 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:00:53.415811 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:00:55.417495 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:00:57.915889 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:01:00.416541 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:01:02.914806 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:01:04.915025 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:01:06.915818 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:01:09.416364 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
	W1218 02:01:11.915095 1597634 pod_ready.go:104] pod "coredns-66bc5c9577-4zknc" is not "Ready", error: <nil>
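
The pod_ready warnings above are minikube polling the coredns pod's Ready condition every ~2.5s. A comparable manual wait (illustrative; the pod name is taken from the log):

    kubectl -n kube-system wait --for=condition=Ready \
      pod/coredns-66bc5c9577-4zknc --timeout=4m
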
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343365892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343381514Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343418092Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343433542Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343443264Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343454948Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343463957Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343476125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343492305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343522483Z" level=info msg="Connect containerd service"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.343787182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.344338751Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359530690Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359745094Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.359671930Z" level=info msg="Start subscribing containerd event"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.365773580Z" level=info msg="Start recovering state"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383747116Z" level=info msg="Start event monitor"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383803385Z" level=info msg="Start cni network conf syncer for default"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383814093Z" level=info msg="Start streaming server"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383824997Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383833907Z" level=info msg="runtime interface starting up..."
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383841612Z" level=info msg="starting plugins..."
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.383874005Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 18 01:41:23 no-preload-970975 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 18 01:41:23 no-preload-970975 containerd[555]: time="2025-12-18T01:41:23.385843444Z" level=info msg="containerd successfully booted in 0.065726s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1218 02:01:17.119630   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 02:01:17.120256   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 02:01:17.121848   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 02:01:17.122532   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1218 02:01:17.124053   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
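
"connection refused" on localhost:8443 means nothing is listening where the apiserver should be, which is consistent with the empty container list above. Two quick host-side checks that would confirm this (illustrative, not part of the test harness):

    sudo ss -tlnp | grep 8443           # is anything bound to the apiserver port?
    sudo crictl ps -a | grep apiserver  # was an apiserver container ever created?
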
	
	
	==> dmesg <==
	[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:01:17 up  8:43,  0 user,  load average: 1.55, 1.79, 1.58
	Linux no-preload-970975 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 18 02:01:13 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 02:01:14 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1585.
	Dec 18 02:01:14 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 02:01:14 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 02:01:14 no-preload-970975 kubelet[10216]: E1218 02:01:14.692986   10216 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 02:01:14 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 02:01:14 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 02:01:15 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1586.
	Dec 18 02:01:15 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 02:01:15 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 02:01:15 no-preload-970975 kubelet[10222]: E1218 02:01:15.438028   10222 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 02:01:15 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 02:01:15 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 02:01:16 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1587.
	Dec 18 02:01:16 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 02:01:16 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 02:01:16 no-preload-970975 kubelet[10242]: E1218 02:01:16.209746   10242 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 02:01:16 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 02:01:16 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 18 02:01:16 no-preload-970975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1588.
	Dec 18 02:01:16 no-preload-970975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 02:01:16 no-preload-970975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 18 02:01:16 no-preload-970975 kubelet[10307]: E1218 02:01:16.962042   10307 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 18 02:01:16 no-preload-970975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 18 02:01:16 no-preload-970975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
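
The restart loop above is the failure visible on this node (no-preload-970975): the v1.35.0-rc.1 kubelet validates its configuration at startup and refuses to run on a cgroup v1 host, so systemd restarts it indefinitely (1588 attempts by the end of this log). The host's cgroup version can be confirmed with (illustrative):

    stat -fc %T /sys/fs/cgroup   # 'cgroup2fs' = cgroup v2, 'tmpfs' = legacy cgroup v1
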
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-970975 -n no-preload-970975: exit status 2 (340.570235ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-970975" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (287.94s)
E1218 02:03:03.657908 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/calico-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:03:08.382407 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:03:13.900295 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/calico-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:03:25.214514 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.3/json-events 3.66
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.2
18 TestDownloadOnly/v1.34.3/DeleteAll 0.32
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.24
21 TestDownloadOnly/v1.35.0-rc.1/json-events 3.93
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 138.1
38 TestAddons/serial/Volcano 41.68
40 TestAddons/serial/GCPAuth/Namespaces 0.2
41 TestAddons/serial/GCPAuth/FakeCredentials 10.02
44 TestAddons/parallel/Registry 16.66
45 TestAddons/parallel/RegistryCreds 0.78
46 TestAddons/parallel/Ingress 20.21
47 TestAddons/parallel/InspektorGadget 11.82
48 TestAddons/parallel/MetricsServer 5.86
50 TestAddons/parallel/CSI 59.44
51 TestAddons/parallel/Headlamp 16.13
52 TestAddons/parallel/CloudSpanner 5.65
53 TestAddons/parallel/LocalPath 8.65
54 TestAddons/parallel/NvidiaDevicePlugin 6.7
55 TestAddons/parallel/Yakd 11.92
57 TestAddons/StoppedEnableDisable 12.38
58 TestCertOptions 46.1
59 TestCertExpiration 222.6
61 TestForceSystemdFlag 35.42
62 TestForceSystemdEnv 32.72
63 TestDockerEnvContainerd 48.08
67 TestErrorSpam/setup 31.54
68 TestErrorSpam/start 0.82
69 TestErrorSpam/status 1.12
70 TestErrorSpam/pause 1.81
71 TestErrorSpam/unpause 1.92
72 TestErrorSpam/stop 1.81
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 52.71
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.34
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
84 TestFunctional/serial/CacheCmd/cache/add_local 1.25
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 51.57
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.48
95 TestFunctional/serial/LogsFileCmd 1.56
96 TestFunctional/serial/InvalidService 4.72
98 TestFunctional/parallel/ConfigCmd 0.56
99 TestFunctional/parallel/DashboardCmd 7.06
100 TestFunctional/parallel/DryRun 0.46
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.15
106 TestFunctional/parallel/ServiceCmdConnect 6.63
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 19.52
110 TestFunctional/parallel/SSHCmd 0.72
111 TestFunctional/parallel/CpCmd 2.09
113 TestFunctional/parallel/FileSync 0.39
114 TestFunctional/parallel/CertSync 2.55
118 TestFunctional/parallel/NodeLabels 0.13
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
122 TestFunctional/parallel/License 0.37
123 TestFunctional/parallel/Version/short 0.08
124 TestFunctional/parallel/Version/components 1.43
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.06
130 TestFunctional/parallel/ImageCommands/Setup 0.68
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.56
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.64
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.83
138 TestFunctional/parallel/ProfileCmd/profile_list 0.53
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.73
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.41
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
156 TestFunctional/parallel/MountCmd/any-port 8.45
157 TestFunctional/parallel/ServiceCmd/List 0.61
158 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
160 TestFunctional/parallel/ServiceCmd/Format 0.42
161 TestFunctional/parallel/ServiceCmd/URL 0.48
162 TestFunctional/parallel/MountCmd/specific-port 2.31
163 TestFunctional/parallel/MountCmd/VerifyCleanup 2.41
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.35
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.35
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.89
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.98
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 0.97
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.46
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.73
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 2.17
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.36
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 2.11
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.7
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.3
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.5
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.23
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.25
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.23
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.22
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.44
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.26
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.46
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1.42
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.16
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.15
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.14
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.49
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.54
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.88
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.43
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.41
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.38
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.4
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 2.08
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.3
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.05
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 138.11
265 TestMultiControlPlane/serial/DeployApp 7.26
266 TestMultiControlPlane/serial/PingHostFromPods 1.84
267 TestMultiControlPlane/serial/AddWorkerNode 31.78
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.14
270 TestMultiControlPlane/serial/CopyFile 20.81
271 TestMultiControlPlane/serial/StopSecondaryNode 12.99
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.74
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.39
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 97.39
276 TestMultiControlPlane/serial/DeleteSecondaryNode 10.78
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.89
278 TestMultiControlPlane/serial/StopCluster 36.41
279 TestMultiControlPlane/serial/RestartCluster 67.89
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.86
281 TestMultiControlPlane/serial/AddSecondaryNode 56.23
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.15
287 TestJSONOutput/start/Command 52.07
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.77
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.66
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.97
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 39.42
313 TestKicCustomNetwork/use_default_bridge_network 36.64
314 TestKicExistingNetwork 36.37
315 TestKicCustomSubnet 33.54
316 TestKicStaticIP 35.8
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 67.85
321 TestMountStart/serial/StartWithMountFirst 8.17
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.37
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.73
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.91
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 78.83
333 TestMultiNode/serial/DeployApp2Nodes 5.29
334 TestMultiNode/serial/PingHostFrom2Pods 1
335 TestMultiNode/serial/AddNode 28.86
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.77
338 TestMultiNode/serial/CopyFile 10.75
339 TestMultiNode/serial/StopNode 2.42
340 TestMultiNode/serial/StartAfterStop 7.89
341 TestMultiNode/serial/RestartKeepsNodes 77.82
342 TestMultiNode/serial/DeleteNode 5.7
343 TestMultiNode/serial/StopMultiNode 24.09
344 TestMultiNode/serial/RestartMultiNode 47.55
345 TestMultiNode/serial/ValidateNameConflict 34.27
350 TestPreload 118.77
352 TestScheduledStopUnix 109.01
355 TestInsufficientStorage 12.73
356 TestRunningBinaryUpgrade 64.04
359 TestMissingContainerUpgrade 144.48
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 50.84
363 TestNoKubernetes/serial/StartWithStopK8s 9.88
364 TestNoKubernetes/serial/Start 8.19
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
367 TestNoKubernetes/serial/ProfileList 0.71
368 TestNoKubernetes/serial/Stop 1.29
369 TestNoKubernetes/serial/StartNoArgs 7.38
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
371 TestStoppedBinaryUpgrade/Setup 1.08
372 TestStoppedBinaryUpgrade/Upgrade 302
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.24
382 TestPause/serial/Start 51.03
383 TestPause/serial/SecondStartNoReconfiguration 6.46
384 TestPause/serial/Pause 0.73
385 TestPause/serial/VerifyStatus 0.35
386 TestPause/serial/Unpause 0.63
387 TestPause/serial/PauseAgain 0.89
388 TestPause/serial/DeletePaused 2.57
389 TestPause/serial/VerifyDeletedResources 0.42
397 TestNetworkPlugins/group/false 3.63
402 TestStartStop/group/old-k8s-version/serial/FirstStart 71.61
405 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.26
407 TestStartStop/group/old-k8s-version/serial/Stop 12.11
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
409 TestStartStop/group/old-k8s-version/serial/SecondStart 26.41
410 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11.01
411 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
412 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
413 TestStartStop/group/old-k8s-version/serial/Pause 3.25
415 TestStartStop/group/embed-certs/serial/FirstStart 52.9
416 TestStartStop/group/embed-certs/serial/DeployApp 8.32
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
418 TestStartStop/group/embed-certs/serial/Stop 12.06
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/embed-certs/serial/SecondStart 51.49
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
424 TestStartStop/group/embed-certs/serial/Pause 3.13
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.51
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.1
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.63
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.14
440 TestStartStop/group/no-preload/serial/Stop 1.29
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.31
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
453 TestNetworkPlugins/group/auto/Start 52.15
454 TestNetworkPlugins/group/auto/KubeletFlags 0.51
455 TestNetworkPlugins/group/auto/NetCatPod 11.27
456 TestNetworkPlugins/group/auto/DNS 0.23
457 TestNetworkPlugins/group/auto/Localhost 0.15
458 TestNetworkPlugins/group/auto/HairPin 0.15
459 TestNetworkPlugins/group/kindnet/Start 52.7
460 TestNetworkPlugins/group/kindnet/ControllerPod 6
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
462 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
463 TestNetworkPlugins/group/kindnet/DNS 0.26
464 TestNetworkPlugins/group/kindnet/Localhost 0.21
465 TestNetworkPlugins/group/kindnet/HairPin 0.2
467 TestNetworkPlugins/group/calico/Start 63.21
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/KubeletFlags 0.32
470 TestNetworkPlugins/group/calico/NetCatPod 11.46
471 TestNetworkPlugins/group/calico/DNS 0.22
472 TestNetworkPlugins/group/calico/Localhost 0.14
473 TestNetworkPlugins/group/calico/HairPin 0.18
474 TestNetworkPlugins/group/custom-flannel/Start 60.2
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
477 TestNetworkPlugins/group/custom-flannel/DNS 0.4
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
480 TestNetworkPlugins/group/enable-default-cni/Start 80.79
481 TestNetworkPlugins/group/flannel/Start 60.88
482 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
483 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.33
484 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
485 TestNetworkPlugins/group/enable-default-cni/Localhost 0.36
486 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
487 TestNetworkPlugins/group/bridge/Start 85.54
488 TestNetworkPlugins/group/flannel/ControllerPod 6.01
489 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
490 TestNetworkPlugins/group/flannel/NetCatPod 10.35
491 TestNetworkPlugins/group/flannel/DNS 0.24
492 TestNetworkPlugins/group/flannel/Localhost 0.19
493 TestNetworkPlugins/group/flannel/HairPin 0.24
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
495 TestNetworkPlugins/group/bridge/NetCatPod 9.28
496 TestNetworkPlugins/group/bridge/DNS 0.25
497 TestNetworkPlugins/group/bridge/Localhost 0.17
498 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.28.0/json-events (6s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-597540 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-597540 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.004010352s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.00s)
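Note: the start above runs with -o=json, so minikube writes each progress update as one JSON event per stdout line. A minimal Go sketch of consuming that stream outside the test harness follows; the event field accessed here ("type") is an assumption based on this run's output, not a documented contract.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same download-only invocation as the test, minus the test harness.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
		"--download-only", "-p", "download-only-597540", "--force",
		"--kubernetes-version=v1.28.0", "--driver=docker",
		"--container-runtime=containerd")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// With -o=json, each progress update is a single JSON object per line.
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println(ev["type"]) // e.g. a step or download event type
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}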

TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1218 00:10:56.114124 1261148 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1218 00:10:56.114224 1261148 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
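Note: the preload-exists assertion reduces to a file-existence check on the tarball cached by the preceding download-only run. A sketch of the same check; the cache path is copied from the log above, and MINIKUBE_HOME is assumed to point at the run's .minikube directory.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path layout taken from the "Found local preload" log line; adjust
	// MINIKUBE_HOME to match the environment under test.
	tarball := os.Getenv("MINIKUBE_HOME") +
		"/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("found local preload:", tarball)
}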

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-597540
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-597540: exit status 85 (99.996509ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-597540 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-597540 │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:10:50
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:10:50.154973 1261153 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:10:50.155174 1261153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:10:50.155200 1261153 out.go:374] Setting ErrFile to fd 2...
	I1218 00:10:50.155220 1261153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:10:50.155499 1261153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	W1218 00:10:50.155673 1261153 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22186-1259289/.minikube/config/config.json: open /home/jenkins/minikube-integration/22186-1259289/.minikube/config/config.json: no such file or directory
	I1218 00:10:50.156179 1261153 out.go:368] Setting JSON to true
	I1218 00:10:50.157080 1261153 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24797,"bootTime":1765991854,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:10:50.157181 1261153 start.go:143] virtualization:  
	I1218 00:10:50.162857 1261153 out.go:99] [download-only-597540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1218 00:10:50.163123 1261153 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball: no such file or directory
	I1218 00:10:50.163282 1261153 notify.go:221] Checking for updates...
	I1218 00:10:50.167462 1261153 out.go:171] MINIKUBE_LOCATION=22186
	I1218 00:10:50.171044 1261153 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:10:50.174264 1261153 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:10:50.177377 1261153 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:10:50.181055 1261153 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 00:10:50.187034 1261153 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 00:10:50.187322 1261153 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:10:50.217215 1261153 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:10:50.217343 1261153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:10:50.273339 1261153 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-18 00:10:50.264167064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:10:50.273444 1261153 docker.go:319] overlay module found
	I1218 00:10:50.276842 1261153 out.go:99] Using the docker driver based on user configuration
	I1218 00:10:50.276900 1261153 start.go:309] selected driver: docker
	I1218 00:10:50.276911 1261153 start.go:927] validating driver "docker" against <nil>
	I1218 00:10:50.277018 1261153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:10:50.334228 1261153 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-18 00:10:50.325166817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:10:50.334433 1261153 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1218 00:10:50.334732 1261153 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1218 00:10:50.334891 1261153 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 00:10:50.338286 1261153 out.go:171] Using Docker driver with root privileges
	I1218 00:10:50.341354 1261153 cni.go:84] Creating CNI manager for ""
	I1218 00:10:50.341426 1261153 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1218 00:10:50.341439 1261153 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 00:10:50.341520 1261153 start.go:353] cluster config:
	{Name:download-only-597540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-597540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:10:50.344587 1261153 out.go:99] Starting "download-only-597540" primary control-plane node in "download-only-597540" cluster
	I1218 00:10:50.344615 1261153 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1218 00:10:50.347760 1261153 out.go:99] Pulling base image v0.0.48-1765966054-22186 ...
	I1218 00:10:50.347805 1261153 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1218 00:10:50.347862 1261153 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1218 00:10:50.364533 1261153 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1218 00:10:50.364743 1261153 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory
	I1218 00:10:50.364858 1261153 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1218 00:10:50.399180 1261153 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1218 00:10:50.399209 1261153 cache.go:65] Caching tarball of preloaded images
	I1218 00:10:50.399397 1261153 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1218 00:10:50.402843 1261153 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1218 00:10:50.402877 1261153 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1218 00:10:50.482959 1261153 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1218 00:10:50.483093 1261153 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1218 00:10:54.946697 1261153 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1218 00:10:54.947080 1261153 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/download-only-597540/config.json ...
	I1218 00:10:54.947115 1261153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/download-only-597540/config.json: {Name:mkaf60182ed4d9a1091ba5712ac42dd9abebc98d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 00:10:54.950988 1261153 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1218 00:10:54.951330 1261153 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-597540 host does not exist
	  To start a cluster, run: "minikube start -p download-only-597540"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
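Note: LogsDuration passes even though "minikube logs" exits non-zero. For a download-only profile the control-plane host was never created, and this run returned exit status 85 with the "host does not exist" message shown above; the test asserts that outcome rather than success. A sketch of capturing that exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "logs",
		"-p", "download-only-597540").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// This run reported exit status 85 for the never-created host;
		// the audit table and "Last Start" log come from this output.
		fmt.Printf("minikube logs exited %d\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("failed to invoke minikube:", err)
	}
}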

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-597540
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.3/json-events (3.66s)
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-237988 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-237988 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.654995225s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (3.66s)

TestDownloadOnly/v1.34.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1218 00:11:00.218894 1261148 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
I1218 00:11:00.218953 1261148 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.2s)
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-237988
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-237988: exit status 85 (202.516549ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-597540 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-597540 │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │ 18 Dec 25 00:10 UTC │
	│ delete  │ -p download-only-597540                                                                                                                                                               │ download-only-597540 │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │ 18 Dec 25 00:10 UTC │
	│ start   │ -o=json --download-only -p download-only-237988 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-237988 │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:10:56
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:10:56.605825 1261356 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:10:56.606321 1261356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:10:56.606369 1261356 out.go:374] Setting ErrFile to fd 2...
	I1218 00:10:56.606390 1261356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:10:56.607129 1261356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:10:56.607771 1261356 out.go:368] Setting JSON to true
	I1218 00:10:56.608662 1261356 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24803,"bootTime":1765991854,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:10:56.608816 1261356 start.go:143] virtualization:  
	I1218 00:10:56.612382 1261356 out.go:99] [download-only-237988] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:10:56.612716 1261356 notify.go:221] Checking for updates...
	I1218 00:10:56.615468 1261356 out.go:171] MINIKUBE_LOCATION=22186
	I1218 00:10:56.618498 1261356 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:10:56.621390 1261356 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:10:56.624262 1261356 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:10:56.627491 1261356 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 00:10:56.633217 1261356 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 00:10:56.633526 1261356 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:10:56.657078 1261356 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:10:56.657193 1261356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:10:56.719527 1261356 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-18 00:10:56.710544884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:10:56.719631 1261356 docker.go:319] overlay module found
	I1218 00:10:56.722690 1261356 out.go:99] Using the docker driver based on user configuration
	I1218 00:10:56.722720 1261356 start.go:309] selected driver: docker
	I1218 00:10:56.722732 1261356 start.go:927] validating driver "docker" against <nil>
	I1218 00:10:56.722847 1261356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:10:56.778102 1261356 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-18 00:10:56.769259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:10:56.778261 1261356 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1218 00:10:56.778593 1261356 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1218 00:10:56.778771 1261356 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 00:10:56.781897 1261356 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-237988 host does not exist
	  To start a cluster, run: "minikube start -p download-only-237988"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.20s)

TestDownloadOnly/v1.34.3/DeleteAll (0.32s)
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.32s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.24s)
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-237988
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.24s)

TestDownloadOnly/v1.35.0-rc.1/json-events (3.93s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-812509 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-812509 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.931239433s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (3.93s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1218 00:11:04.912399 1261148 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1218 00:11:04.912434 1261148 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-812509
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-812509: exit status 85 (92.693013ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                            ARGS                                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-597540 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-597540 │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │ 18 Dec 25 00:10 UTC │
	│ delete  │ -p download-only-597540                                                                                                                                                                    │ download-only-597540 │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │ 18 Dec 25 00:10 UTC │
	│ start   │ -o=json --download-only -p download-only-237988 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-237988 │ jenkins │ v1.37.0 │ 18 Dec 25 00:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 18 Dec 25 00:11 UTC │ 18 Dec 25 00:11 UTC │
	│ delete  │ -p download-only-237988                                                                                                                                                                    │ download-only-237988 │ jenkins │ v1.37.0 │ 18 Dec 25 00:11 UTC │ 18 Dec 25 00:11 UTC │
	│ start   │ -o=json --download-only -p download-only-812509 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-812509 │ jenkins │ v1.37.0 │ 18 Dec 25 00:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/18 00:11:01
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 00:11:01.026502 1261558 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:11:01.026614 1261558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:11:01.026625 1261558 out.go:374] Setting ErrFile to fd 2...
	I1218 00:11:01.026631 1261558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:11:01.026869 1261558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:11:01.027261 1261558 out.go:368] Setting JSON to true
	I1218 00:11:01.028033 1261558 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24807,"bootTime":1765991854,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:11:01.028097 1261558 start.go:143] virtualization:  
	I1218 00:11:01.072828 1261558 out.go:99] [download-only-812509] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:11:01.073120 1261558 notify.go:221] Checking for updates...
	I1218 00:11:01.112683 1261558 out.go:171] MINIKUBE_LOCATION=22186
	I1218 00:11:01.137666 1261558 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:11:01.170880 1261558 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:11:01.199804 1261558 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:11:01.225209 1261558 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 00:11:01.285249 1261558 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 00:11:01.285580 1261558 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:11:01.310961 1261558 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:11:01.311090 1261558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:11:01.368280 1261558 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-18 00:11:01.358895803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:11:01.368381 1261558 docker.go:319] overlay module found
	I1218 00:11:01.413448 1261558 out.go:99] Using the docker driver based on user configuration
	I1218 00:11:01.413511 1261558 start.go:309] selected driver: docker
	I1218 00:11:01.413519 1261558 start.go:927] validating driver "docker" against <nil>
	I1218 00:11:01.413639 1261558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:11:01.469496 1261558 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-18 00:11:01.459536333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:11:01.469665 1261558 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1218 00:11:01.469927 1261558 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1218 00:11:01.470081 1261558 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 00:11:01.509955 1261558 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-812509 host does not exist
	  To start a cluster, run: "minikube start -p download-only-812509"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-812509
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)
=== RUN   TestBinaryMirror
I1218 00:11:06.263239 1261148 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-816788 --alsologtostderr --binary-mirror http://127.0.0.1:43753 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-816788" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-816788
--- PASS: TestBinaryMirror (0.61s)
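Note: TestBinaryMirror only needs an HTTP endpoint that can stand in for dl.k8s.io; the Run line above points minikube at http://127.0.0.1:43753. A sketch of a local stand-in mirror follows. The on-disk layout mirroring dl.k8s.io release paths (e.g. /release/v1.34.3/bin/linux/arm64/kubectl under ./mirror) is an assumption for illustration, not the test's actual fixture.

package main

import (
	"log"
	"net/http"
)

func main() {
	// Expects e.g. ./mirror/release/v1.34.3/bin/linux/arm64/kubectl on disk.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on 127.0.0.1:43753")
	log.Fatal(http.ListenAndServe("127.0.0.1:43753", fs))
}

A download-only start would then be pointed at it with --binary-mirror http://127.0.0.1:43753, as in the Run line above.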

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006416
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-006416: exit status 85 (79.733875ms)

-- stdout --
	* Profile "addons-006416" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006416"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006416
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-006416: exit status 85 (77.322574ms)

-- stdout --
	* Profile "addons-006416" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006416"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (138.1s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-006416 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-006416 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m18.095726075s)
--- PASS: TestAddons/Setup (138.10s)

TestAddons/serial/Volcano (41.68s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 44.690888ms
addons_test.go:878: volcano-admission stabilized in 44.849408ms
addons_test.go:870: volcano-scheduler stabilized in 44.908853ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-rkcrd" [cedfd726-414d-4013-bf7e-2b02ada264f9] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003431006s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-j2579" [668d13f0-e8b3-4ebf-84f5-a58c2fd73593] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005307569s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-cxqz8" [058aff56-49dd-4ef9-894a-7c8401f36aca] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003535469s
addons_test.go:905: (dbg) Run:  kubectl --context addons-006416 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-006416 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-006416 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [b45a84f9-85fa-4bec-84fa-350fd1d2a926] Pending
helpers_test.go:353: "test-job-nginx-0" [b45a84f9-85fa-4bec-84fa-350fd1d2a926] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [b45a84f9-85fa-4bec-84fa-350fd1d2a926] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003558039s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-006416 addons disable volcano --alsologtostderr -v=1: (12.050796164s)
--- PASS: TestAddons/serial/Volcano (41.68s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-006416 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-006416 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (10.02s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-006416 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-006416 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bb831d20-1c0c-482e-9129-51b238e8d089] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [bb831d20-1c0c-482e-9129-51b238e8d089] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003278587s
addons_test.go:696: (dbg) Run:  kubectl --context addons-006416 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-006416 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-006416 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-006416 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.02s)

TestAddons/parallel/Registry (16.66s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.563915ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-rspbj" [c98bdf9e-09d2-452b-b6ec-ef41635e9499] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003800747s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-m7rvr" [8a1dda30-79b1-4e5e-a443-ec5cc8139ae6] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003034997s
addons_test.go:394: (dbg) Run:  kubectl --context addons-006416 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-006416 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-006416 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.464457022s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 ip
2025/12/18 00:14:42 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.66s)

TestAddons/parallel/RegistryCreds (0.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.591261ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-006416
addons_test.go:334: (dbg) Run:  kubectl --context addons-006416 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

TestAddons/parallel/Ingress (20.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-006416 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-006416 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-006416 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [17ea89dc-4430-4f47-a561-c55783905a6b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [17ea89dc-4430-4f47-a561-c55783905a6b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003883917s
I1218 00:15:13.744153 1261148 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-006416 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-006416 addons disable ingress-dns --alsologtostderr -v=1: (1.412490252s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-006416 addons disable ingress --alsologtostderr -v=1: (8.000734058s)
--- PASS: TestAddons/parallel/Ingress (20.21s)

TestAddons/parallel/InspektorGadget (11.82s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-nf5b7" [1a56c0e6-43c7-4afd-9788-1290e0c95a00] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00319252s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-006416 addons disable inspektor-gadget --alsologtostderr -v=1: (5.818815287s)
--- PASS: TestAddons/parallel/InspektorGadget (11.82s)

TestAddons/parallel/MetricsServer (5.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.738971ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-c79hv" [ada90ec3-1f82-407e-9163-92ebe637decc] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003748341s
addons_test.go:465: (dbg) Run:  kubectl --context addons-006416 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

TestAddons/parallel/CSI (59.44s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1218 00:14:59.941199 1261148 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1218 00:14:59.945356 1261148 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1218 00:14:59.945382 1261148 kapi.go:107] duration metric: took 7.722985ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.73347ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-006416 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-006416 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [653f655b-bab1-4366-9490-a9c0204ed0a4] Pending
helpers_test.go:353: "task-pv-pod" [653f655b-bab1-4366-9490-a9c0204ed0a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [653f655b-bab1-4366-9490-a9c0204ed0a4] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00281107s
addons_test.go:574: (dbg) Run:  kubectl --context addons-006416 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-006416 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-006416 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-006416 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-006416 delete pod task-pv-pod: (1.074889301s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-006416 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-006416 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-006416 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [556cf49f-fb95-4858-a3cd-0b734ff5ea19] Pending
helpers_test.go:353: "task-pv-pod-restore" [556cf49f-fb95-4858-a3cd-0b734ff5ea19] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [556cf49f-fb95-4858-a3cd-0b734ff5ea19] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002799813s
addons_test.go:616: (dbg) Run:  kubectl --context addons-006416 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-006416 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-006416 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-006416 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.832455011s)
--- PASS: TestAddons/parallel/CSI (59.44s)

TestAddons/parallel/Headlamp (16.13s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-006416 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-006416 --alsologtostderr -v=1: (1.060901246s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-rqcgc" [dadb15dc-824a-49d1-a033-5baa44ca0e2e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-rqcgc" [dadb15dc-824a-49d1-a033-5baa44ca0e2e] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-rqcgc" [dadb15dc-824a-49d1-a033-5baa44ca0e2e] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003677963s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-006416 addons disable headlamp --alsologtostderr -v=1: (6.067643112s)
--- PASS: TestAddons/parallel/Headlamp (16.13s)

TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-lmjmc" [6d90ae5b-df30-4c91-8383-de0b1ff4fd51] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00339666s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (8.65s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-006416 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-006416 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-006416 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [8822e50c-ebeb-49e3-8bb4-216d5dbcf568] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [8822e50c-ebeb-49e3-8bb4-216d5dbcf568] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [8822e50c-ebeb-49e3-8bb4-216d5dbcf568] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003080628s
addons_test.go:969: (dbg) Run:  kubectl --context addons-006416 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 ssh "cat /opt/local-path-provisioner/pvc-61e213a7-ca60-46ec-956f-d764d646f353_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-006416 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-006416 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.65s)

TestAddons/parallel/NvidiaDevicePlugin (6.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-c6krm" [45ed1471-9728-459e-b860-1f14b5283c3b] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003618852s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

TestAddons/parallel/Yakd (11.92s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-npbzk" [665b3d59-069a-4611-a9b9-12f399e51cda] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003803551s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-006416 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-006416 addons disable yakd --alsologtostderr -v=1: (5.915544015s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

TestAddons/StoppedEnableDisable (12.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-006416
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-006416: (12.098608356s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006416
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006416
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-006416
--- PASS: TestAddons/StoppedEnableDisable (12.38s)

TestCertOptions (46.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-993444 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-993444 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (42.608027035s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-993444 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-993444 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-993444 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-993444" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-993444
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-993444: (2.402025182s)
--- PASS: TestCertOptions (46.10s)

TestCertExpiration (222.6s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-976781 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-976781 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (32.808589348s)
E1218 01:27:57.379731 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:28:25.214510 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:30:04.400787 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-976781 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-976781 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.858301604s)
helpers_test.go:176: Cleaning up "cert-expiration-976781" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-976781
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-976781: (2.92949302s)
--- PASS: TestCertExpiration (222.60s)

TestForceSystemdFlag (35.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-902070 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1218 01:26:00.445573 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-902070 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.01550924s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-902070 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-902070" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-902070
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-902070: (2.10188407s)
--- PASS: TestForceSystemdFlag (35.42s)

TestForceSystemdEnv (32.72s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-984117 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-984117 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.142812228s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-984117 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-984117" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-984117
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-984117: (2.269614006s)
--- PASS: TestForceSystemdEnv (32.72s)

TestDockerEnvContainerd (48.08s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-942383 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-942383 --driver=docker  --container-runtime=containerd: (31.782489717s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-942383"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-942383": (1.434130893s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-aJXyCyvG8LC6/agent.1280931" SSH_AGENT_PID="1280932" DOCKER_HOST=ssh://docker@127.0.0.1:33887 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-aJXyCyvG8LC6/agent.1280931" SSH_AGENT_PID="1280932" DOCKER_HOST=ssh://docker@127.0.0.1:33887 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-aJXyCyvG8LC6/agent.1280931" SSH_AGENT_PID="1280932" DOCKER_HOST=ssh://docker@127.0.0.1:33887 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.357862964s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-aJXyCyvG8LC6/agent.1280931" SSH_AGENT_PID="1280932" DOCKER_HOST=ssh://docker@127.0.0.1:33887 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-942383" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-942383
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-942383: (2.516906337s)
--- PASS: TestDockerEnvContainerd (48.08s)

TestErrorSpam/setup (31.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-707240 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-707240 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-707240 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-707240 --driver=docker  --container-runtime=containerd: (31.54482238s)
--- PASS: TestErrorSpam/setup (31.54s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.92s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 unpause
--- PASS: TestErrorSpam/unpause (1.92s)

TestErrorSpam/stop (1.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 stop: (1.483533998s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-707240 --log_dir /tmp/nospam-707240 stop
--- PASS: TestErrorSpam/stop (1.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.71s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-739047 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1218 00:18:25.218777 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:25.225173 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:25.236607 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:25.258040 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:25.299473 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:25.380898 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:25.542476 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:25.864109 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:26.506280 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:27.788073 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:30.350151 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:18:35.471823 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-739047 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.708227093s)
--- PASS: TestFunctional/serial/StartWithProxy (52.71s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.34s)

=== RUN   TestFunctional/serial/SoftStart
I1218 00:18:41.808859 1261148 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-739047 --alsologtostderr -v=8
E1218 00:18:45.713465 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-739047 --alsologtostderr -v=8: (7.33736145s)
functional_test.go:678: soft start took 7.341295324s for "functional-739047" cluster.
I1218 00:18:49.146568 1261148 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (7.34s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-739047 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 cache add registry.k8s.io/pause:3.1: (1.294458932s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 cache add registry.k8s.io/pause:3.3: (1.129787175s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 cache add registry.k8s.io/pause:latest: (1.068725469s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-739047 /tmp/TestFunctionalserialCacheCmdcacheadd_local2862061556/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cache add minikube-local-cache-test:functional-739047
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cache delete minikube-local-cache-test:functional-739047
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-739047
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (326.995697ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 kubectl -- --context functional-739047 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-739047 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (51.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-739047 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1218 00:19:06.195203 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:19:47.156730 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-739047 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.570940955s)
functional_test.go:776: restart took 51.571043213s for "functional-739047" cluster.
I1218 00:19:48.337495 1261148 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (51.57s)
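
--extra-config takes component.key=value pairs, so apiserver.enable-admission-plugins=NamespaceAutoProvision above reaches the kube-apiserver as --enable-admission-plugins=NamespaceAutoProvision. With that admission plugin on, creating an object in a namespace that does not exist yet should auto-provision the namespace instead of returning NotFound. A quick probe of that behavior (illustrative sketch; the namespace and pod names are made up):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With NamespaceAutoProvision enabled, this create should succeed and the
	// namespace should appear on the fly rather than trigger a NotFound error.
	out, err := exec.Command("kubectl", "--context", "functional-739047",
		"run", "probe", "--image", "kicbase/echo-server",
		"-n", "ns-that-does-not-exist-yet").CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}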

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-739047 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
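
The check behind these lines decodes `kubectl get po -o=json` for the control-plane pods and requires phase Running plus a Ready condition, which is what the paired "phase"/"status" lines report per component. A self-contained sketch of the same assertion (the struct follows the core/v1 Pod schema; the program itself is illustrative, not the test's code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the fields the health check needs from `kubectl get po -o=json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-739047",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pl podList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}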

TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 logs: (1.476490698s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

TestFunctional/serial/LogsFileCmd (1.56s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 logs --file /tmp/TestFunctionalserialLogsFileCmd4253840728/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 logs --file /tmp/TestFunctionalserialLogsFileCmd4253840728/001/logs.txt: (1.555837132s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

TestFunctional/serial/InvalidService (4.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-739047 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-739047
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-739047: exit status 115 (414.378559ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30313 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-739047 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-739047 delete -f testdata/invalidsvc.yaml: (1.059566523s)
--- PASS: TestFunctional/serial/InvalidService (4.72s)

TestFunctional/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 config get cpus: exit status 14 (92.768346ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 config get cpus: exit status 14 (96.02111ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)
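
Exit status 14 is minikube's "key not found in config" code, which is why `config get cpus` fails after each unset but succeeds between `config set cpus 2` and the following unset. In Go, that code surfaces through *exec.ExitError, roughly like this (hypothetical helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// configGet returns the value and the exit code; 14 means the key is unset.
func configGet(key string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-739047", "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return "", ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	val, code := configGet("cpus")
	fmt.Printf("value=%q exit=%d\n", val, code)
}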

TestFunctional/parallel/DashboardCmd (7.06s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-739047 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-739047 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1297470: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.06s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-739047 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-739047 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (190.455917ms)

-- stdout --
	* [functional-739047] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1218 00:20:25.650004 1296080 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:20:25.650206 1296080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:20:25.650215 1296080 out.go:374] Setting ErrFile to fd 2...
	I1218 00:20:25.650220 1296080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:20:25.650499 1296080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:20:25.650866 1296080 out.go:368] Setting JSON to false
	I1218 00:20:25.651829 1296080 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25372,"bootTime":1765991854,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:20:25.651901 1296080 start.go:143] virtualization:  
	I1218 00:20:25.655410 1296080 out.go:179] * [functional-739047] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:20:25.659392 1296080 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:20:25.659564 1296080 notify.go:221] Checking for updates...
	I1218 00:20:25.665295 1296080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:20:25.668308 1296080 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:20:25.671233 1296080 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:20:25.674057 1296080 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:20:25.676990 1296080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:20:25.680382 1296080 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 00:20:25.681041 1296080 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:20:25.707855 1296080 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:20:25.707975 1296080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:20:25.773241 1296080 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-18 00:20:25.764198685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:20:25.773352 1296080 docker.go:319] overlay module found
	I1218 00:20:25.776391 1296080 out.go:179] * Using the docker driver based on existing profile
	I1218 00:20:25.779181 1296080 start.go:309] selected driver: docker
	I1218 00:20:25.779200 1296080 start.go:927] validating driver "docker" against &{Name:functional-739047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-739047 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:20:25.779309 1296080 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:20:25.782812 1296080 out.go:203] 
	W1218 00:20:25.785709 1296080 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1218 00:20:25.788716 1296080 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-739047 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.46s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-739047 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-739047 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (205.131899ms)

-- stdout --
	* [functional-739047] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1218 00:20:31.657524 1297224 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:20:31.657722 1297224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:20:31.657750 1297224 out.go:374] Setting ErrFile to fd 2...
	I1218 00:20:31.657769 1297224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:20:31.658846 1297224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:20:31.659325 1297224 out.go:368] Setting JSON to false
	I1218 00:20:31.660345 1297224 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25378,"bootTime":1765991854,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:20:31.660489 1297224 start.go:143] virtualization:  
	I1218 00:20:31.663596 1297224 out.go:179] * [functional-739047] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1218 00:20:31.667278 1297224 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:20:31.667452 1297224 notify.go:221] Checking for updates...
	I1218 00:20:31.672936 1297224 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:20:31.675886 1297224 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:20:31.678816 1297224 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:20:31.681839 1297224 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:20:31.684843 1297224 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:20:31.688243 1297224 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 00:20:31.689134 1297224 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:20:31.721706 1297224 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:20:31.721834 1297224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:20:31.782895 1297224 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-18 00:20:31.77352155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:20:31.783000 1297224 docker.go:319] overlay module found
	I1218 00:20:31.786034 1297224 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1218 00:20:31.788857 1297224 start.go:309] selected driver: docker
	I1218 00:20:31.788886 1297224 start.go:927] validating driver "docker" against &{Name:functional-739047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-739047 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:20:31.789020 1297224 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:20:31.792736 1297224 out.go:203] 
	W1218 00:20:31.795695 1297224 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1218 00:20:31.798503 1297224 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
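
The French stderr is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun, in translation: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". Which message catalog minikube picks follows the process locale, which the test environment sets before invoking the binary; a sketch of forcing it for a single run (assuming minikube reads LC_ALL/LANG, the usual locale-detection route):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-739047",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	// Override the locale for this invocation only; expect the French output above.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s(exit: %v)\n", out, err)
}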

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
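
The -f argument is rendered through Go's text/template against minikube's status struct, which is what makes the `{{.Host}}`-style placeholders work (the logged format string spells "kublet", but the field it reads is still .Kubelet, so the typo only affects the label). A standalone sketch with a stand-in struct; minikube's real Status type has more fields:

package main

import (
	"os"
	"text/template"
)

// Status stands in for minikube's status struct; only the fields the logged
// format string references are modeled here.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}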

TestFunctional/parallel/ServiceCmdConnect (6.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-739047 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-739047 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-htpdv" [200752a9-56f9-4df1-9b8a-b9949076750a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-htpdv" [200752a9-56f9-4df1-9b8a-b9949076750a] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.00372785s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31584
functional_test.go:1680: http://192.168.49.2:31584: success! body:
Request served by hello-node-connect-7d85dfc575-htpdv

HTTP/1.1 GET /

Host: 192.168.49.2:31584
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.63s)
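
Once `service hello-node-connect --url` has resolved the NodePort, the test just fetches that URL until the echo-server answers with the request dump shown above. A minimal retry loop in the same spirit (the URL is hard-coded from this run; the retry budget is illustrative):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:31584"
	for i := 0; i < 30; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s\n", body)
			return
		}
		time.Sleep(2 * time.Second) // the backing pod may still be starting
	}
	fmt.Println("service never became reachable")
}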

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (19.52s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [1c23a4e1-133d-4bc8-8829-0bcc02f273c5] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003515164s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-739047 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-739047 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-739047 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-739047 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [56d5a7b9-9ff3-4395-8c51-20368ba7f3c9] Pending
helpers_test.go:353: "sp-pod" [56d5a7b9-9ff3-4395-8c51-20368ba7f3c9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003853723s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-739047 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-739047 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-739047 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f23b1b9f-ddad-40e2-8772-3464e4cce35a] Pending
helpers_test.go:353: "sp-pod" [f23b1b9f-ddad-40e2-8772-3464e4cce35a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002888262s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-739047 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.52s)
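
The closing exec is what makes this a persistence test: /tmp/mount/foo is written into the PVC-backed volume, the pod is deleted and recreated from the same manifest, and the file must still be listed afterwards. A condensed sketch of that sequence (the kubectl arguments mirror the logged commands; the wait for the new pod to reach Running is elided):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the profile's context.
func kubectl(args ...string) ([]byte, error) {
	args = append([]string{"--context", "functional-739047"}, args...)
	return exec.Command("kubectl", args...).CombinedOutput()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits here until the recreated sp-pod is Running)
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("%s", out) // expect "foo" to survive the pod restart
}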

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh -n functional-739047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cp functional-739047:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd165455770/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh -n functional-739047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh -n functional-739047 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1261148/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo cat /etc/test/nested/copy/1261148/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.55s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1261148.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo cat /etc/ssl/certs/1261148.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1261148.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo cat /usr/share/ca-certificates/1261148.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12611482.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo cat /etc/ssl/certs/12611482.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12611482.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo cat /usr/share/ca-certificates/12611482.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.55s)
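
The `.0` filenames are OpenSSL subject-hash links: the certificate synced in as 1261148.pem must also resolve as 51391683.0 (and 12611482.pem as 3ec20f2e.0) so that TLS libraries scanning /etc/ssl/certs by hash can find it. A sketch that derives the expected link name by shelling out to openssl (the input path is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject-hash used for the .0 symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/etc/ssl/certs/1261148.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expect a link named %s.0\n", hash) // e.g. 51391683.0
}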

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-739047 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 ssh "sudo systemctl is-active docker": exit status 1 (338.367284ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 ssh "sudo systemctl is-active crio": exit status 1 (333.086839ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
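
`systemctl is-active` exits 0 for an active unit and 3 for an inactive one; ssh relays that as "Process exited with status 3" and the outer minikube command then reports exit status 1. So "inactive" on stdout plus a non-zero exit is the expected shape here, since containerd, not docker or crio, is the active runtime. A sketch that surveys all three runtimes the same way (hypothetical; the profile name is from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-739047",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.Output() // stdout still carries "active"/"inactive" on failure
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode()
		}
		fmt.Printf("%s: %s(exit %d)\n", unit, out, code)
	}
}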

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.43s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 version -o=json --components: (1.4262038s)
--- PASS: TestFunctional/parallel/Version/components (1.43s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-739047 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-739047
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-739047
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-739047 image ls --format short --alsologtostderr:
I1218 00:20:39.843277 1298908 out.go:360] Setting OutFile to fd 1 ...
I1218 00:20:39.843484 1298908 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:39.843510 1298908 out.go:374] Setting ErrFile to fd 2...
I1218 00:20:39.843530 1298908 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:39.843863 1298908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:20:39.844927 1298908 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:39.845111 1298908 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:39.845910 1298908 cli_runner.go:164] Run: docker container inspect functional-739047 --format={{.State.Status}}
I1218 00:20:39.864150 1298908 ssh_runner.go:195] Run: systemctl --version
I1218 00:20:39.864217 1298908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-739047
I1218 00:20:39.883596 1298908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33897 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-739047/id_rsa Username:docker}
I1218 00:20:39.991343 1298908 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
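
As the stderr shows, `image ls` answers by running `sudo crictl images --output json` inside the node over ssh and reformatting the result. A sketch that decodes that JSON into the short listing above (the field names follow crictl's JSON output and may vary across versions):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the slice of the `crictl images --output json` payload
// that the short format needs.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-739047",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}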

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-739047 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                    IMAGE                    │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                  │ v1.34.3                               │ sha256:4461da │ 22.8MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.3                               │ sha256:7ada8f │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.3                               │ sha256:2f2aa2 │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest                                │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-739047                     │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest                                │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1                               │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.3                               │ sha256:cf65ae │ 24.6MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/kindest/kindnetd                  │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ public.ecr.aws/nginx/nginx                  │ alpine                                │ sha256:10afed │ 23MB   │
│ registry.k8s.io/etcd                        │ 3.6.5-0                               │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/pause                       │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ docker.io/library/minikube-local-cache-test │ functional-739047                     │ sha256:6d75ac │ 992B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
└─────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-739047 image ls --format table --alsologtostderr:
I1218 00:20:40.749374 1299123 out.go:360] Setting OutFile to fd 1 ...
I1218 00:20:40.749488 1299123 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:40.749493 1299123 out.go:374] Setting ErrFile to fd 2...
I1218 00:20:40.749499 1299123 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:40.749849 1299123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:20:40.750720 1299123 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:40.750867 1299123 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:40.751474 1299123 cli_runner.go:164] Run: docker container inspect functional-739047 --format={{.State.Status}}
I1218 00:20:40.780103 1299123 ssh_runner.go:195] Run: systemctl --version
I1218 00:20:40.780160 1299123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-739047
I1218 00:20:40.801977 1299123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33897 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-739047/id_rsa Username:docker}
I1218 00:20:40.920442 1299123 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-739047 image ls --format json --alsologtostderr:
[{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"20719958"},{"id":"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"22804272"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-739047","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:6d75aca4bf4907371f012e5f07ac5372de8ce8437f37e9634c438c36e1e883ad","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-739047"],"size":"992"},{"id":"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"24567639"},{"id":"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"15776215"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-739047 image ls --format json --alsologtostderr:
I1218 00:20:40.446868 1299026 out.go:360] Setting OutFile to fd 1 ...
I1218 00:20:40.447122 1299026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:40.447152 1299026 out.go:374] Setting ErrFile to fd 2...
I1218 00:20:40.447172 1299026 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:40.447544 1299026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:20:40.448371 1299026 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:40.448618 1299026 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:40.449231 1299026 cli_runner.go:164] Run: docker container inspect functional-739047 --format={{.State.Status}}
I1218 00:20:40.473956 1299026 ssh_runner.go:195] Run: systemctl --version
I1218 00:20:40.474013 1299026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-739047
I1218 00:20:40.515547 1299026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33897 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-739047/id_rsa Username:docker}
I1218 00:20:40.633001 1299026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
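
The JSON printed above is a plain array, so it can be filtered with standard tooling; a minimal sketch, not part of the test run, assuming jq is installed on the host:

    # list only images that still carry a tag (repoTags non-empty)
    out/minikube-linux-arm64 -p functional-739047 image ls --format json \
      | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'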

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-739047 image ls --format yaml --alsologtostderr:
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:6d75aca4bf4907371f012e5f07ac5372de8ce8437f37e9634c438c36e1e883ad
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-739047
size: "992"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "15776215"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "24567639"
- id: sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "20719958"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "22804272"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-739047
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-739047 image ls --format yaml --alsologtostderr:
I1218 00:20:40.104458 1298950 out.go:360] Setting OutFile to fd 1 ...
I1218 00:20:40.104701 1298950 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:40.104732 1298950 out.go:374] Setting ErrFile to fd 2...
I1218 00:20:40.104752 1298950 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:40.105323 1298950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:20:40.107279 1298950 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:40.107446 1298950 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:40.108068 1298950 cli_runner.go:164] Run: docker container inspect functional-739047 --format={{.State.Status}}
I1218 00:20:40.135885 1298950 ssh_runner.go:195] Run: systemctl --version
I1218 00:20:40.135953 1298950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-739047
I1218 00:20:40.160461 1298950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33897 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-739047/id_rsa Username:docker}
I1218 00:20:40.275893 1298950 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 ssh pgrep buildkitd: exit status 1 (366.1957ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr: (3.46164079s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr:
I1218 00:20:40.756494 1299128 out.go:360] Setting OutFile to fd 1 ...
I1218 00:20:40.757836 1299128 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:40.757893 1299128 out.go:374] Setting ErrFile to fd 2...
I1218 00:20:40.757914 1299128 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:40.758269 1299128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:20:40.759011 1299128 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:40.761228 1299128 config.go:182] Loaded profile config "functional-739047": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1218 00:20:40.762081 1299128 cli_runner.go:164] Run: docker container inspect functional-739047 --format={{.State.Status}}
I1218 00:20:40.785629 1299128 ssh_runner.go:195] Run: systemctl --version
I1218 00:20:40.785683 1299128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-739047
I1218 00:20:40.808842 1299128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33897 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-739047/id_rsa Username:docker}
I1218 00:20:40.934132 1299128 build_images.go:162] Building image from path: /tmp/build.4008959960.tar
I1218 00:20:40.934255 1299128 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1218 00:20:40.956681 1299128 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4008959960.tar
I1218 00:20:40.967154 1299128 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4008959960.tar: stat -c "%s %y" /var/lib/minikube/build/build.4008959960.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4008959960.tar': No such file or directory
I1218 00:20:40.967202 1299128 ssh_runner.go:362] scp /tmp/build.4008959960.tar --> /var/lib/minikube/build/build.4008959960.tar (3072 bytes)
I1218 00:20:40.992898 1299128 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4008959960
I1218 00:20:41.001879 1299128 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4008959960 -xf /var/lib/minikube/build/build.4008959960.tar
I1218 00:20:41.012734 1299128 containerd.go:394] Building image: /var/lib/minikube/build/build.4008959960
I1218 00:20:41.012812 1299128 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4008959960 --local dockerfile=/var/lib/minikube/build/build.4008959960 --output type=image,name=localhost/my-image:functional-739047
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ff638f31d5da3bc87ba7a492951c505acf6b36d9f86e04bd6bb840a01fe0a8c5
#8 exporting manifest sha256:ff638f31d5da3bc87ba7a492951c505acf6b36d9f86e04bd6bb840a01fe0a8c5 0.0s done
#8 exporting config sha256:b84e789e019f546f6d063ec408893b9509bcc5854139d6f9313beaebf3f93263 0.0s done
#8 naming to localhost/my-image:functional-739047 done
#8 DONE 0.2s
I1218 00:20:44.128739 1299128 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4008959960 --local dockerfile=/var/lib/minikube/build/build.4008959960 --output type=image,name=localhost/my-image:functional-739047: (3.115899961s)
I1218 00:20:44.128806 1299128 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4008959960
I1218 00:20:44.137479 1299128 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4008959960.tar
I1218 00:20:44.146016 1299128 build_images.go:218] Built localhost/my-image:functional-739047 from /tmp/build.4008959960.tar
I1218 00:20:44.146046 1299128 build_images.go:134] succeeded building to: functional-739047
I1218 00:20:44.146051 1299128 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.06s)
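
The Dockerfile exercised here can be inferred from buildkit steps #1-#8 above (a 97-byte definition: FROM busybox, RUN true, ADD content.txt). A hypothetical reconstruction, not copied from testdata/build:

    # sketch: recreate an equivalent build context and run the same build
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    echo hello > content.txt
    out/minikube-linux-arm64 -p functional-739047 image build -t localhost/my-image:functional-739047 .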

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-739047
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image load --daemon kicbase/echo-server:functional-739047 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 image load --daemon kicbase/echo-server:functional-739047 --alsologtostderr: (1.262993087s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image load --daemon kicbase/echo-server:functional-739047 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 image load --daemon kicbase/echo-server:functional-739047 --alsologtostderr: (1.365743506s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.64s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-739047
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image load --daemon kicbase/echo-server:functional-739047 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-739047 image load --daemon kicbase/echo-server:functional-739047 --alsologtostderr: (1.135336387s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "465.117725ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "67.616834ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "508.88564ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "72.125111ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image save kicbase/echo-server:functional-739047 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-739047 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-739047 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-739047 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-739047 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1294618: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image rm kicbase/echo-server:functional-739047 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-739047 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-739047 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [b8b04d96-f9ed-452b-bb90-859444b768f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [b8b04d96-f9ed-452b-bb90-859444b768f6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004440545s
I1218 00:20:12.806484 1261148 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-739047
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 image save --daemon kicbase/echo-server:functional-739047 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-739047
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
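
Taken together, the save/remove/load tests above amount to a tar round trip through the node's image store; condensed into the same commands (the tar path is illustrative):

    out/minikube-linux-arm64 -p functional-739047 image save kicbase/echo-server:functional-739047 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-739047 image rm kicbase/echo-server:functional-739047
    out/minikube-linux-arm64 -p functional-739047 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-739047 image ls    # the image is present again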

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-739047 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.246.222 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
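
The tunnel tests above follow one pattern: start the tunnel, wait for the service to receive a LoadBalancer ingress IP, then hit that IP directly. A sketch against the same profile (the IP is whatever the cluster assigns; it was 10.106.246.222 in this run):

    out/minikube-linux-arm64 -p functional-739047 tunnel --alsologtostderr &
    IP=$(kubectl --context functional-739047 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"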

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-739047 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-739047 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-739047 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-nfkwr" [a53a0779-0582-436c-b220-cb769ecc72e0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-nfkwr" [a53a0779-0582-436c-b220-cb769ecc72e0] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005254541s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdany-port2704800526/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1766017226069984575" to /tmp/TestFunctionalparallelMountCmdany-port2704800526/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1766017226069984575" to /tmp/TestFunctionalparallelMountCmdany-port2704800526/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1766017226069984575" to /tmp/TestFunctionalparallelMountCmdany-port2704800526/001/test-1766017226069984575
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.509518ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1218 00:20:26.433298 1261148 retry.go:31] will retry after 600.392366ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 18 00:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 18 00:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 18 00:20 test-1766017226069984575
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh cat /mount-9p/test-1766017226069984575
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-739047 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [2661df39-c6da-496e-99c5-e7f4e6ef9ed3] Pending
helpers_test.go:353: "busybox-mount" [2661df39-c6da-496e-99c5-e7f4e6ef9ed3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [2661df39-c6da-496e-99c5-e7f4e6ef9ed3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [2661df39-c6da-496e-99c5-e7f4e6ef9ed3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.01619148s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-739047 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdany-port2704800526/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.45s)
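
The mount check above is simply a retry loop around findmnt until the 9p mount appears; the essential commands, as a sketch with an illustrative host directory:

    out/minikube-linux-arm64 mount -p functional-739047 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T /mount-9p | grep 9p"   # retried until the mount shows up
    out/minikube-linux-arm64 -p functional-739047 ssh -- ls -la /mount-9p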

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 service list -o json
functional_test.go:1504: Took "536.203621ms" to run "out/minikube-linux-arm64 -p functional-739047 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30165
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30165
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)
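
The ServiceCmd variants above differ only in output shape; condensed, using the endpoint discovered in this run:

    out/minikube-linux-arm64 -p functional-739047 service list -o json
    out/minikube-linux-arm64 -p functional-739047 service --namespace=default --https --url hello-node   # https://192.168.49.2:30165
    out/minikube-linux-arm64 -p functional-739047 service hello-node --url --format={{.IP}}              # IP only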

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdspecific-port1226370769/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (532.892793ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1218 00:20:35.051534 1261148 retry.go:31] will retry after 428.772748ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdspecific-port1226370769/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 ssh "sudo umount -f /mount-9p": exit status 1 (376.336712ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-739047 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdspecific-port1226370769/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.31s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T" /mount1: exit status 1 (949.590193ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1218 00:20:37.777313 1261148 retry.go:31] will retry after 278.383512ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T" /mount2
2025/12/18 00:20:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-739047 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-739047 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)
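
Cleanup of the three parallel mounts is done with a single kill switch rather than per-mount unmounts, as the last Run line above shows:

    out/minikube-linux-arm64 mount -p functional-739047 --kill=true   # terminates every mount daemon for the profile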

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-739047
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-739047
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-739047
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-232602 cache add registry.k8s.io/pause:3.1: (1.147243219s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-232602 cache add registry.k8s.io/pause:3.3: (1.130290645s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-232602 cache add registry.k8s.io/pause:latest: (1.069712547s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2905676441/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cache add minikube-local-cache-test:functional-232602
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cache delete minikube-local-cache-test:functional-232602
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-232602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.097346ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.89s)
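
The reload test is a remove/restore cycle against the node's containerd image store; the same three commands in isolation:

    out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl rmi registry.k8s.io/pause:latest        # drop the image on the node
    out/minikube-linux-arm64 -p functional-232602 cache reload                                            # re-push everything in the cache
    out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again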

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.98s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi4067502606/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.97s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 config get cpus: exit status 14 (71.3888ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 config get cpus: exit status 14 (60.274933ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)
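Note: config get signals a missing key through its exit code, which is exactly what the test asserts twice above. A minimal sketch of the cycle:
  out/minikube-linux-arm64 -p functional-232602 config set cpus 2
  out/minikube-linux-arm64 -p functional-232602 config get cpus    # prints 2, exit 0
  out/minikube-linux-arm64 -p functional-232602 config unset cpus
  out/minikube-linux-arm64 -p functional-232602 config get cpus    # exit 14: specified key could not be found in config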

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (186.588127ms)

-- stdout --
	* [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1218 00:49:43.289968 1329869 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:49:43.290089 1329869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:49:43.290135 1329869 out.go:374] Setting ErrFile to fd 2...
	I1218 00:49:43.290143 1329869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:49:43.290593 1329869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:49:43.290983 1329869 out.go:368] Setting JSON to false
	I1218 00:49:43.291818 1329869 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27130,"bootTime":1765991854,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:49:43.291896 1329869 start.go:143] virtualization:  
	I1218 00:49:43.295665 1329869 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 00:49:43.299410 1329869 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:49:43.299493 1329869 notify.go:221] Checking for updates...
	I1218 00:49:43.305520 1329869 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:49:43.308484 1329869 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:49:43.311402 1329869 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:49:43.314310 1329869 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:49:43.317229 1329869 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:49:43.320739 1329869 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:49:43.321385 1329869 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:49:43.346570 1329869 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:49:43.346689 1329869 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:49:43.406697 1329869 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:49:43.393862886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:49:43.406822 1329869 docker.go:319] overlay module found
	I1218 00:49:43.411834 1329869 out.go:179] * Using the docker driver based on existing profile
	I1218 00:49:43.414613 1329869 start.go:309] selected driver: docker
	I1218 00:49:43.414630 1329869 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:49:43.414733 1329869 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:49:43.418038 1329869 out.go:203] 
	W1218 00:49:43.420815 1329869 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1218 00:49:43.423677 1329869 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232602 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.43s)
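Note: because --dry-run exits after validation, the RSRC_INSUFFICIENT_REQ_MEMORY path can be exercised without touching the running cluster. A sketch of the failing invocation from this run:
  out/minikube-linux-arm64 start -p functional-232602 --dry-run --memory 250MB --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
  # exit status 23: requested 250MiB is below the usable minimum of 1800MB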

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-232602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (192.379383ms)

-- stdout --
	* [functional-232602] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1218 00:49:43.724650 1329993 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:49:43.724800 1329993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:49:43.724829 1329993 out.go:374] Setting ErrFile to fd 2...
	I1218 00:49:43.724835 1329993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:49:43.725246 1329993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:49:43.725655 1329993 out.go:368] Setting JSON to false
	I1218 00:49:43.726537 1329993 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27130,"bootTime":1765991854,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 00:49:43.726603 1329993 start.go:143] virtualization:  
	I1218 00:49:43.729825 1329993 out.go:179] * [functional-232602] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1218 00:49:43.732853 1329993 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 00:49:43.732977 1329993 notify.go:221] Checking for updates...
	I1218 00:49:43.738587 1329993 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 00:49:43.741453 1329993 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 00:49:43.744301 1329993 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 00:49:43.747141 1329993 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 00:49:43.749958 1329993 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 00:49:43.753490 1329993 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 00:49:43.754156 1329993 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 00:49:43.785304 1329993 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 00:49:43.785430 1329993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:49:43.841100 1329993 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 00:49:43.829277142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:49:43.841205 1329993 docker.go:319] overlay module found
	I1218 00:49:43.844333 1329993 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1218 00:49:43.847146 1329993 start.go:309] selected driver: docker
	I1218 00:49:43.847177 1329993 start.go:927] validating driver "docker" against &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1218 00:49:43.847299 1329993 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 00:49:43.851013 1329993 out.go:203] 
	W1218 00:49:43.853978 1329993 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1218 00:49:43.856960 1329993 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.19s)
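Note: the French output above is selected via the standard locale environment variables that minikube consults for its translations. A sketch, assuming the fr message catalog is bundled (as this run shows it is):
  LC_ALL=fr out/minikube-linux-arm64 start -p functional-232602 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
  # same exit status 23, but 'X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ...' instead of the English message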

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh -n functional-232602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cp functional-232602:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3387164111/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh -n functional-232602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh -n functional-232602 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.17s)
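Note: cp addresses the node side with <profile>:<path>, covering both transfer directions exercised above. A minimal sketch:
  out/minikube-linux-arm64 -p functional-232602 cp testdata/cp-test.txt /home/docker/cp-test.txt               # host -> node
  out/minikube-linux-arm64 -p functional-232602 cp functional-232602:/home/docker/cp-test.txt ./cp-test.txt    # node -> host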

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1261148/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo cat /etc/test/nested/copy/1261148/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.36s)
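Note: the file is present because minikube syncs everything under $MINIKUBE_HOME/files into the node's filesystem on start. A sketch of how such a fixture would be staged on the host (the staging path is an assumption; only the in-node path is taken from the run above):
  mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/1261148"
  echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/1261148/hosts"
  # shows up as /etc/test/nested/copy/1261148/hosts inside the node after the next start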

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (2.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1261148.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo cat /etc/ssl/certs/1261148.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1261148.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo cat /usr/share/ca-certificates/1261148.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12611482.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo cat /etc/ssl/certs/12611482.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12611482.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo cat /usr/share/ca-certificates/12611482.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (2.11s)
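Note: the .0 names checked above follow the OpenSSL subject-hash convention for CA directories, so the expected link name can be derived from the certificate itself. A sketch:
  openssl x509 -noout -subject_hash -in /etc/ssl/certs/1261148.pem
  # prints the 8-hex-digit hash (51391683 here), i.e. the same cert is also reachable as /etc/ssl/certs/51391683.0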

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 ssh "sudo systemctl is-active docker": exit status 1 (337.214698ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 ssh "sudo systemctl is-active crio": exit status 1 (365.226025ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.70s)
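Note: systemctl is-active exits 0 only for an active unit, so exit status 3 with 'inactive' on stdout is the expected result for the two runtimes that are not in use. A sketch against the same node:
  out/minikube-linux-arm64 -p functional-232602 ssh "sudo systemctl is-active containerd"   # active, exit 0
  out/minikube-linux-arm64 -p functional-232602 ssh "sudo systemctl is-active docker"       # inactive, exit 3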

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-232602 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-232602
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-232602
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232602 image ls --format short --alsologtostderr:
I1218 00:49:46.850519 1330643 out.go:360] Setting OutFile to fd 1 ...
I1218 00:49:46.850688 1330643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:46.850695 1330643 out.go:374] Setting ErrFile to fd 2...
I1218 00:49:46.850700 1330643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:46.851069 1330643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:49:46.852113 1330643 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:46.852276 1330643 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:46.853110 1330643 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:49:46.870109 1330643 ssh_runner.go:195] Run: systemctl --version
I1218 00:49:46.870175 1330643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:49:46.888062 1330643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:49:46.995065 1330643 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.23s)
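Note: as the stderr trace above shows, image ls on a containerd cluster is served by crictl over SSH, so the raw data behind every --format variant can be fetched directly. A sketch:
  out/minikube-linux-arm64 -p functional-232602 ssh sudo crictl images --output json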

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-232602 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-rc.1       │ sha256:a34b34 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-rc.1       │ sha256:7e3ace │ 22.4MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kicbase/echo-server               │ functional-232602  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-rc.1       │ sha256:3c6ba2 │ 24.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-rc.1       │ sha256:abca4d │ 15.4MB │
│ docker.io/library/minikube-local-cache-test │ functional-232602  │ sha256:6d75ac │ 992B   │
│ localhost/my-image                          │ functional-232602  │ sha256:164768 │ 831kB  │
│ registry.k8s.io/etcd                        │ 3.6.6-0            │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232602 image ls --format table --alsologtostderr:
I1218 00:49:50.982628 1331043 out.go:360] Setting OutFile to fd 1 ...
I1218 00:49:50.982792 1331043 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:50.982822 1331043 out.go:374] Setting ErrFile to fd 2...
I1218 00:49:50.982842 1331043 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:50.983118 1331043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:49:50.983793 1331043 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:50.983979 1331043 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:50.984549 1331043 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:49:51.001656 1331043 ssh_runner.go:195] Run: systemctl --version
I1218 00:49:51.001722 1331043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:49:51.023508 1331043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:49:51.131523 1331043 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-232602 image ls --format json --alsologtostderr:
[{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"15405535"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:6d75aca4bf4907371f012e5f07ac5372de8ce8437f37e9634c438c36e1e883ad","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-232602"],"size":"992"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:164768fa125038411f5912f105fa73c4a3ff6109d752a9662986211a7beebf0f","repoDigests":[],"repoTags":["localhost/my-image:functional-232602"],"size":"830601"},{"id":"sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54","repoDigests":["registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"24692223"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"20672157"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-232602"],"size":"2173567"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"},{"id":"sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e","repoDigests":["registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"22432301"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232602 image ls --format json --alsologtostderr:
I1218 00:49:50.741097 1330998 out.go:360] Setting OutFile to fd 1 ...
I1218 00:49:50.741233 1330998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:50.741244 1330998 out.go:374] Setting ErrFile to fd 2...
I1218 00:49:50.741250 1330998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:50.741498 1330998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:49:50.742144 1330998 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:50.742269 1330998 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:50.742754 1330998 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:49:50.760210 1330998 ssh_runner.go:195] Run: systemctl --version
I1218 00:49:50.760272 1330998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:49:50.776549 1330998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:49:50.883653 1330998 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-232602 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-232602
size: "2173567"
- id: sha256:6d75aca4bf4907371f012e5f07ac5372de8ce8437f37e9634c438c36e1e883ad
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-232602
size: "992"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:3c6ba27e07aef16adb050828695bfe6206439147b9ade2a2a1777c276bf79a54
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "24692223"
- id: sha256:a34b3483f25ba81aa72f3aeb607a8c756479e8497d8420acbcd2854162ebf84a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "20672157"
- id: sha256:7e3acea3d87aa7ca234514e7f9c10450c7a7f87fc273fc9b5a220e2a2be1ce4e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "22432301"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:abca4d5226620be2218c3971464a1066651a743008c1db8720353446a4b7bbde
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "15405535"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232602 image ls --format yaml --alsologtostderr:
I1218 00:49:47.078089 1330680 out.go:360] Setting OutFile to fd 1 ...
I1218 00:49:47.078233 1330680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:47.078245 1330680 out.go:374] Setting ErrFile to fd 2...
I1218 00:49:47.078268 1330680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:47.078588 1330680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:49:47.079246 1330680 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:47.079420 1330680 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:47.079998 1330680 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:49:47.098529 1330680 ssh_runner.go:195] Run: systemctl --version
I1218 00:49:47.098592 1330680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:49:47.115686 1330680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:49:47.219172 1330680 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 ssh pgrep buildkitd: exit status 1 (262.069601ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image build -t localhost/my-image:functional-232602 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-232602 image build -t localhost/my-image:functional-232602 testdata/build --alsologtostderr: (2.942030249s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-232602 image build -t localhost/my-image:functional-232602 testdata/build --alsologtostderr:
I1218 00:49:47.565535 1330784 out.go:360] Setting OutFile to fd 1 ...
I1218 00:49:47.565715 1330784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:47.565750 1330784 out.go:374] Setting ErrFile to fd 2...
I1218 00:49:47.565763 1330784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:49:47.566038 1330784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:49:47.566708 1330784 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:47.567323 1330784 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:49:47.567906 1330784 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:49:47.585261 1330784 ssh_runner.go:195] Run: systemctl --version
I1218 00:49:47.585324 1330784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:49:47.602786 1330784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:49:47.707339 1330784 build_images.go:162] Building image from path: /tmp/build.335364436.tar
I1218 00:49:47.707434 1330784 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1218 00:49:47.715582 1330784 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.335364436.tar
I1218 00:49:47.719178 1330784 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.335364436.tar: stat -c "%s %y" /var/lib/minikube/build/build.335364436.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.335364436.tar': No such file or directory
I1218 00:49:47.719207 1330784 ssh_runner.go:362] scp /tmp/build.335364436.tar --> /var/lib/minikube/build/build.335364436.tar (3072 bytes)
I1218 00:49:47.736598 1330784 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.335364436
I1218 00:49:47.744194 1330784 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.335364436 -xf /var/lib/minikube/build/build.335364436.tar
I1218 00:49:47.752512 1330784 containerd.go:394] Building image: /var/lib/minikube/build/build.335364436
I1218 00:49:47.752584 1330784 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.335364436 --local dockerfile=/var/lib/minikube/build/build.335364436 --output type=image,name=localhost/my-image:functional-232602
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4e3789cf7bbdb29fbdd32c6b809ce11fa10ce260379b54ae81f3080185bf2f28 0.0s done
#8 exporting config sha256:164768fa125038411f5912f105fa73c4a3ff6109d752a9662986211a7beebf0f 0.0s done
#8 naming to localhost/my-image:functional-232602 done
#8 DONE 0.2s
I1218 00:49:50.432509 1330784 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.335364436 --local dockerfile=/var/lib/minikube/build/build.335364436 --output type=image,name=localhost/my-image:functional-232602: (2.679879083s)
I1218 00:49:50.432600 1330784 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.335364436
I1218 00:49:50.440777 1330784 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.335364436.tar
I1218 00:49:50.448283 1330784 build_images.go:218] Built localhost/my-image:functional-232602 from /tmp/build.335364436.tar
I1218 00:49:50.448311 1330784 build_images.go:134] succeeded building to: functional-232602
I1218 00:49:50.448316 1330784 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.44s)
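
The buildkit steps logged above (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt) imply a build context of roughly the following shape; this is a reconstruction from the step log, not the canonical testdata/build contents. The failed pgrep probe at the top checks for a buildkitd process in the node; the build itself, as the log shows, ships the context to /var/lib/minikube/build as a tar and drives buildctl over SSH.

    # Hypothetical recreation of the build context inferred from steps #5-#7 above.
    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo demo > content.txt
    out/minikube-linux-arm64 -p functional-232602 image build -t localhost/my-image:functional-232602 . --alsologtostderr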

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-232602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image load --daemon kicbase/echo-server:functional-232602 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-232602 image load --daemon kicbase/echo-server:functional-232602 --alsologtostderr: (1.186075373s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image load --daemon kicbase/echo-server:functional-232602 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-232602 image load --daemon kicbase/echo-server:functional-232602 --alsologtostderr: (1.090001395s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-232602
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image load --daemon kicbase/echo-server:functional-232602 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image save kicbase/echo-server:functional-232602 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image rm kicbase/echo-server:functional-232602 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.88s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-232602
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 image save --daemon kicbase/echo-server:functional-232602 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-232602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.43s)
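
Taken together, the ImageSaveToFile/ImageRemove/ImageLoadFromFile/ImageSaveDaemon runs above amount to a full save/remove/load round trip. The same sequence as plain CLI steps, using the exact paths the tests used:

    out/minikube-linux-arm64 -p functional-232602 image save kicbase/echo-server:functional-232602 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-232602 image rm kicbase/echo-server:functional-232602
    out/minikube-linux-arm64 -p functional-232602 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-232602 image ls   # confirm the tag is back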

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
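
DeleteTunnel passes despite the "failed to stop process: exit status 103" line: the helper logs the stop failure without failing the test. The lifecycle being exercised, sketched as shell (TUNNEL_PID is illustrative; the suite manages the daemon itself):

    out/minikube-linux-arm64 -p functional-232602 tunnel --alsologtostderr &   # StartTunnel leaves this running
    TUNNEL_PID=$!
    # ... the other TunnelCmd subtests run against the live tunnel ...
    kill "$TUNNEL_PID"                                                         # DeleteTunnel; status 103 on stop was tolerated here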

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "329.160312ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.32875ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "346.857076ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.659483ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.40s)
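
For anyone re-running these ProfileCmd checks by hand, the JSON they parse can be inspected directly; jq is an assumption here (the suite does its own decoding), and valid/invalid are the top-level arrays minikube's profile-list JSON uses:

    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'
    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'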

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2038120171/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.468582ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1218 00:49:40.238880 1261148 retry.go:31] will retry after 652.950291ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2038120171/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-232602 ssh "sudo umount -f /mount-9p": exit status 1 (271.164613ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-232602 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2038120171/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.08s)
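
The specific-port check above can be replayed by hand against a live profile; a minimal sketch using the port the test pinned (46464) and a scratch host directory:

    mkdir -p /tmp/mnt
    out/minikube-linux-arm64 mount -p functional-232602 /tmp/mnt:/mount-9p --alsologtostderr -v=1 --port 46464 &
    out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry, as above
    out/minikube-linux-arm64 -p functional-232602 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-232602 ssh "sudo umount -f /mount-9p"         # exits 32 once already unmounted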

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-232602 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-232602 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-232602 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2244806759/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-232602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-232602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-232602
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (138.11s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1218 00:52:57.379531 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:57.385910 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:57.397291 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:57.418679 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:57.460032 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:57.541429 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:57.702872 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:58.024531 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:58.666034 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:52:59.947338 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:53:02.510142 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:53:07.631847 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:53:17.873313 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:53:25.214455 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:53:38.354878 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:54:19.316669 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m17.153361183s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (138.11s)
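
After a start like this one, the resulting topology can be checked directly; a minimal verification, assuming the ha-937615 context landed in the default kubeconfig (the worker m04 only appears after AddWorkerNode below):

    out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
    kubectl --context ha-937615 get nodes -o wide   # expect three control-plane nodes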

TestMultiControlPlane/serial/DeployApp (7.26s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 kubectl -- rollout status deployment/busybox: (4.286132102s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-kxv6n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-nf9zf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-vw5bc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-kxv6n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-nf9zf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-vw5bc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-kxv6n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-nf9zf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-vw5bc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.26s)

TestMultiControlPlane/serial/PingHostFromPods (1.84s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-kxv6n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-kxv6n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-nf9zf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-nf9zf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-vw5bc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 kubectl -- exec busybox-7b57f96db7-vw5bc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.84s)
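
The pipeline in ha_test.go:207 extracts the host address as seen from inside a pod: with the busybox image used here, line 5 of the nslookup output carries the resolved address and field 3 is the bare IP, which the follow-up ping then targets (192.168.49.1):

    # Run inside one of the busybox pods above:
    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # -> 192.168.49.1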

TestMultiControlPlane/serial/AddWorkerNode (31.78s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 node add --alsologtostderr -v 5: (30.640318766s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5: (1.136704976s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.78s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-937615 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.136844471s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

TestMultiControlPlane/serial/CopyFile (20.81s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --output json --alsologtostderr -v 5
E1218 00:55:04.395033 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 status --output json --alsologtostderr -v 5: (1.084779584s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp testdata/cp-test.txt ha-937615:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2919307743/001/cp-test_ha-937615.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615:/home/docker/cp-test.txt ha-937615-m02:/home/docker/cp-test_ha-937615_ha-937615-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m02 "sudo cat /home/docker/cp-test_ha-937615_ha-937615-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615:/home/docker/cp-test.txt ha-937615-m03:/home/docker/cp-test_ha-937615_ha-937615-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m03 "sudo cat /home/docker/cp-test_ha-937615_ha-937615-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615:/home/docker/cp-test.txt ha-937615-m04:/home/docker/cp-test_ha-937615_ha-937615-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m04 "sudo cat /home/docker/cp-test_ha-937615_ha-937615-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp testdata/cp-test.txt ha-937615-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2919307743/001/cp-test_ha-937615-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m02:/home/docker/cp-test.txt ha-937615:/home/docker/cp-test_ha-937615-m02_ha-937615.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615 "sudo cat /home/docker/cp-test_ha-937615-m02_ha-937615.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m02:/home/docker/cp-test.txt ha-937615-m03:/home/docker/cp-test_ha-937615-m02_ha-937615-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m03 "sudo cat /home/docker/cp-test_ha-937615-m02_ha-937615-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m02:/home/docker/cp-test.txt ha-937615-m04:/home/docker/cp-test_ha-937615-m02_ha-937615-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m04 "sudo cat /home/docker/cp-test_ha-937615-m02_ha-937615-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp testdata/cp-test.txt ha-937615-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2919307743/001/cp-test_ha-937615-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m03:/home/docker/cp-test.txt ha-937615:/home/docker/cp-test_ha-937615-m03_ha-937615.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615 "sudo cat /home/docker/cp-test_ha-937615-m03_ha-937615.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m03:/home/docker/cp-test.txt ha-937615-m02:/home/docker/cp-test_ha-937615-m03_ha-937615-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m02 "sudo cat /home/docker/cp-test_ha-937615-m03_ha-937615-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m03:/home/docker/cp-test.txt ha-937615-m04:/home/docker/cp-test_ha-937615-m03_ha-937615-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m04 "sudo cat /home/docker/cp-test_ha-937615-m03_ha-937615-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp testdata/cp-test.txt ha-937615-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2919307743/001/cp-test_ha-937615-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m04:/home/docker/cp-test.txt ha-937615:/home/docker/cp-test_ha-937615-m04_ha-937615.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615 "sudo cat /home/docker/cp-test_ha-937615-m04_ha-937615.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m04:/home/docker/cp-test.txt ha-937615-m02:/home/docker/cp-test_ha-937615-m04_ha-937615-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m02 "sudo cat /home/docker/cp-test_ha-937615-m04_ha-937615-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 cp ha-937615-m04:/home/docker/cp-test.txt ha-937615-m03:/home/docker/cp-test_ha-937615-m04_ha-937615-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 ssh -n ha-937615-m03 "sudo cat /home/docker/cp-test_ha-937615-m04_ha-937615-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.81s)
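
The long sequence above is an all-pairs copy matrix with a cat verification after every hop. The node-to-node portion compresses to a loop like this (a sketch; the test also copies testdata/cp-test.txt into each node and back out to a host temp dir first):

    NODES="ha-937615 ha-937615-m02 ha-937615-m03 ha-937615-m04"
    for src in $NODES; do
      for dst in $NODES; do
        [ "$src" = "$dst" ] && continue
        out/minikube-linux-arm64 -p ha-937615 cp "$src:/home/docker/cp-test.txt" "$dst:/home/docker/cp-test_${src}_${dst}.txt"
        out/minikube-linux-arm64 -p ha-937615 ssh -n "$dst" "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
      done
    done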

TestMultiControlPlane/serial/StopSecondaryNode (12.99s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 node stop m02 --alsologtostderr -v 5: (12.174397285s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5: exit status 7 (813.542109ms)
-- stdout --
	ha-937615
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-937615-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937615-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-937615-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1218 00:55:37.374764 1348560 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:55:37.374928 1348560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:55:37.374939 1348560 out.go:374] Setting ErrFile to fd 2...
	I1218 00:55:37.374944 1348560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:55:37.375214 1348560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:55:37.375448 1348560 out.go:368] Setting JSON to false
	I1218 00:55:37.375485 1348560 mustload.go:66] Loading cluster: ha-937615
	I1218 00:55:37.375733 1348560 notify.go:221] Checking for updates...
	I1218 00:55:37.376295 1348560 config.go:182] Loaded profile config "ha-937615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 00:55:37.376331 1348560 status.go:174] checking status of ha-937615 ...
	I1218 00:55:37.377300 1348560 cli_runner.go:164] Run: docker container inspect ha-937615 --format={{.State.Status}}
	I1218 00:55:37.403233 1348560 status.go:371] ha-937615 host status = "Running" (err=<nil>)
	I1218 00:55:37.403258 1348560 host.go:66] Checking if "ha-937615" exists ...
	I1218 00:55:37.403569 1348560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-937615
	I1218 00:55:37.429050 1348560 host.go:66] Checking if "ha-937615" exists ...
	I1218 00:55:37.429430 1348560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:55:37.429489 1348560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-937615
	I1218 00:55:37.449033 1348560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33907 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/ha-937615/id_rsa Username:docker}
	I1218 00:55:37.562289 1348560 ssh_runner.go:195] Run: systemctl --version
	I1218 00:55:37.568839 1348560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:55:37.581360 1348560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 00:55:37.649827 1348560 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-18 00:55:37.640209282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 00:55:37.650516 1348560 kubeconfig.go:125] found "ha-937615" server: "https://192.168.49.254:8443"
	I1218 00:55:37.650562 1348560 api_server.go:166] Checking apiserver status ...
	I1218 00:55:37.650618 1348560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:55:37.666141 1348560 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	I1218 00:55:37.675114 1348560 api_server.go:182] apiserver freezer: "9:freezer:/docker/6952735463f4fa7f595b0a5dc35137990b2849aceacb004f8a2331061b56b5a3/kubepods/burstable/pod5819bec065b91f519698e83665e1fc74/edfdb6c2e0309fbca298226d635b7f7c75b79d488ca796c3cb494c677ef184af"
	I1218 00:55:37.675199 1348560 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6952735463f4fa7f595b0a5dc35137990b2849aceacb004f8a2331061b56b5a3/kubepods/burstable/pod5819bec065b91f519698e83665e1fc74/edfdb6c2e0309fbca298226d635b7f7c75b79d488ca796c3cb494c677ef184af/freezer.state
	I1218 00:55:37.683790 1348560 api_server.go:204] freezer state: "THAWED"
	I1218 00:55:37.683819 1348560 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1218 00:55:37.692223 1348560 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1218 00:55:37.692251 1348560 status.go:463] ha-937615 apiserver status = Running (err=<nil>)
	I1218 00:55:37.692261 1348560 status.go:176] ha-937615 status: &{Name:ha-937615 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 00:55:37.692277 1348560 status.go:174] checking status of ha-937615-m02 ...
	I1218 00:55:37.692582 1348560 cli_runner.go:164] Run: docker container inspect ha-937615-m02 --format={{.State.Status}}
	I1218 00:55:37.719920 1348560 status.go:371] ha-937615-m02 host status = "Stopped" (err=<nil>)
	I1218 00:55:37.719941 1348560 status.go:384] host is not running, skipping remaining checks
	I1218 00:55:37.719948 1348560 status.go:176] ha-937615-m02 status: &{Name:ha-937615-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 00:55:37.719967 1348560 status.go:174] checking status of ha-937615-m03 ...
	I1218 00:55:37.720291 1348560 cli_runner.go:164] Run: docker container inspect ha-937615-m03 --format={{.State.Status}}
	I1218 00:55:37.739416 1348560 status.go:371] ha-937615-m03 host status = "Running" (err=<nil>)
	I1218 00:55:37.739439 1348560 host.go:66] Checking if "ha-937615-m03" exists ...
	I1218 00:55:37.739829 1348560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-937615-m03
	I1218 00:55:37.756742 1348560 host.go:66] Checking if "ha-937615-m03" exists ...
	I1218 00:55:37.757079 1348560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:55:37.757129 1348560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-937615-m03
	I1218 00:55:37.774275 1348560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/ha-937615-m03/id_rsa Username:docker}
	I1218 00:55:37.886465 1348560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:55:37.901555 1348560 kubeconfig.go:125] found "ha-937615" server: "https://192.168.49.254:8443"
	I1218 00:55:37.901587 1348560 api_server.go:166] Checking apiserver status ...
	I1218 00:55:37.901628 1348560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 00:55:37.917035 1348560 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	I1218 00:55:37.926653 1348560 api_server.go:182] apiserver freezer: "9:freezer:/docker/52b8c0d322a5052de42f57c3bde101fe2bf3141da6f96a27828750a3caa60cd8/kubepods/burstable/pod3b7aff6fa9842765fbbc4143b6dd450d/4abbb9afb59e035c6d623cf17d9647e2d8ba3e8cd3b8812958f85f7867eb5b44"
	I1218 00:55:37.926740 1348560 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/52b8c0d322a5052de42f57c3bde101fe2bf3141da6f96a27828750a3caa60cd8/kubepods/burstable/pod3b7aff6fa9842765fbbc4143b6dd450d/4abbb9afb59e035c6d623cf17d9647e2d8ba3e8cd3b8812958f85f7867eb5b44/freezer.state
	I1218 00:55:37.935436 1348560 api_server.go:204] freezer state: "THAWED"
	I1218 00:55:37.935467 1348560 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1218 00:55:37.944108 1348560 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1218 00:55:37.944137 1348560 status.go:463] ha-937615-m03 apiserver status = Running (err=<nil>)
	I1218 00:55:37.944146 1348560 status.go:176] ha-937615-m03 status: &{Name:ha-937615-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 00:55:37.944190 1348560 status.go:174] checking status of ha-937615-m04 ...
	I1218 00:55:37.944535 1348560 cli_runner.go:164] Run: docker container inspect ha-937615-m04 --format={{.State.Status}}
	I1218 00:55:37.962401 1348560 status.go:371] ha-937615-m04 host status = "Running" (err=<nil>)
	I1218 00:55:37.962425 1348560 host.go:66] Checking if "ha-937615-m04" exists ...
	I1218 00:55:37.962721 1348560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-937615-m04
	I1218 00:55:37.980910 1348560 host.go:66] Checking if "ha-937615-m04" exists ...
	I1218 00:55:37.981218 1348560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 00:55:37.981267 1348560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-937615-m04
	I1218 00:55:38.000602 1348560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/ha-937615-m04/id_rsa Username:docker}
	I1218 00:55:38.114939 1348560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 00:55:38.133010 1348560 status.go:176] ha-937615-m04 status: &{Name:ha-937615-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.99s)
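
The stderr above shows how status probes each control plane: find the kube-apiserver PID, resolve its freezer cgroup, confirm the cgroup is THAWED, then hit healthz through the shared VIP. Replayed by hand from inside a control-plane node (cgroup v1 freezer layout, as in the log):

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer$CG/freezer.state"   # expect: THAWED
    curl -ks https://192.168.49.254:8443/healthz         # expect: ok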

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.74s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 node start m02 --alsologtostderr -v 5
E1218 00:55:41.238963 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 node start m02 --alsologtostderr -v 5: (11.949606321s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5: (1.668576004s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.39s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.38498776s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.39s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.39s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 stop --alsologtostderr -v 5: (37.462431976s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 start --wait true --alsologtostderr -v 5: (59.750750776s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.39s)
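
Note: the property under test is that stopping and restarting the whole cluster preserves the node set, asserted by diffing `node list` output from before the stop and after the `start --wait`. An illustrative round-trip sketch (not the real harness code; error handling elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func nodeList() string {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-937615", "node", "list").CombinedOutput()
		if err != nil {
			panic(err)
		}
		return string(out)
	}

	func main() {
		before := nodeList()
		exec.Command("out/minikube-linux-arm64", "-p", "ha-937615", "stop").Run()
		exec.Command("out/minikube-linux-arm64", "-p", "ha-937615", "start", "--wait", "true").Run()
		if after := nodeList(); after != before {
			fmt.Printf("node list changed across restart:\nbefore:\n%s\nafter:\n%s\n", before, after)
		}
	}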

TestMultiControlPlane/serial/DeleteSecondaryNode (10.78s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 node delete m03 --alsologtostderr -v 5: (9.745397273s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.78s)
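
Note: the go-template in the final step iterates .items and prints the status of each node's Ready condition, one per line. With m03 deleted it should emit a " True" line for each of the three remaining nodes (ha-937615, m02, m04); any "False" or "Unknown" would fail the check. The leading space in each output line comes from the template's literal " {{.status}}".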

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.89s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.89s)

TestMultiControlPlane/serial/StopCluster (36.41s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 stop --alsologtostderr -v 5
E1218 00:57:57.379142 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:58:07.466296 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 stop --alsologtostderr -v 5: (36.29714416s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5: exit status 7 (116.471197ms)
-- stdout --
	ha-937615
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937615-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-937615-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1218 00:58:19.525212 1363372 out.go:360] Setting OutFile to fd 1 ...
	I1218 00:58:19.525332 1363372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:58:19.525347 1363372 out.go:374] Setting ErrFile to fd 2...
	I1218 00:58:19.525353 1363372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 00:58:19.525616 1363372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 00:58:19.525815 1363372 out.go:368] Setting JSON to false
	I1218 00:58:19.525847 1363372 mustload.go:66] Loading cluster: ha-937615
	I1218 00:58:19.525951 1363372 notify.go:221] Checking for updates...
	I1218 00:58:19.526286 1363372 config.go:182] Loaded profile config "ha-937615": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 00:58:19.526306 1363372 status.go:174] checking status of ha-937615 ...
	I1218 00:58:19.527142 1363372 cli_runner.go:164] Run: docker container inspect ha-937615 --format={{.State.Status}}
	I1218 00:58:19.544853 1363372 status.go:371] ha-937615 host status = "Stopped" (err=<nil>)
	I1218 00:58:19.544879 1363372 status.go:384] host is not running, skipping remaining checks
	I1218 00:58:19.544886 1363372 status.go:176] ha-937615 status: &{Name:ha-937615 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 00:58:19.544924 1363372 status.go:174] checking status of ha-937615-m02 ...
	I1218 00:58:19.545255 1363372 cli_runner.go:164] Run: docker container inspect ha-937615-m02 --format={{.State.Status}}
	I1218 00:58:19.574643 1363372 status.go:371] ha-937615-m02 host status = "Stopped" (err=<nil>)
	I1218 00:58:19.574663 1363372 status.go:384] host is not running, skipping remaining checks
	I1218 00:58:19.574670 1363372 status.go:176] ha-937615-m02 status: &{Name:ha-937615-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 00:58:19.574690 1363372 status.go:174] checking status of ha-937615-m04 ...
	I1218 00:58:19.575005 1363372 cli_runner.go:164] Run: docker container inspect ha-937615-m04 --format={{.State.Status}}
	I1218 00:58:19.592449 1363372 status.go:371] ha-937615-m04 host status = "Stopped" (err=<nil>)
	I1218 00:58:19.592469 1363372 status.go:384] host is not running, skipping remaining checks
	I1218 00:58:19.592476 1363372 status.go:176] ha-937615-m04 status: &{Name:ha-937615-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.41s)
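
Note: exit status 7 from `status` is the expected outcome here, not a failure: per the command's own `--help` text, minikube encodes host, cluster, and Kubernetes health bitwise in the exit code, so a fully stopped profile yields 1 (host stopped) + 2 (cluster stopped) + 4 (Kubernetes stopped) = 7.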

TestMultiControlPlane/serial/RestartCluster (67.89s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1218 00:58:25.080399 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:58:25.214988 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m6.869083979s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.89s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

TestMultiControlPlane/serial/AddSecondaryNode (56.23s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 node add --control-plane --alsologtostderr -v 5
E1218 01:00:04.399307 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 node add --control-plane --alsologtostderr -v 5: (55.12234397s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-937615 status --alsologtostderr -v 5: (1.105066254s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (56.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.15s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.146012644s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.15s)

TestJSONOutput/start/Command (52.07s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-269015 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-269015 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (52.063241111s)
--- PASS: TestJSONOutput/start/Command (52.07s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.77s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-269015 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-269015 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.97s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-269015 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-269015 --output=json --user=testUser: (5.972383736s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-773580 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-773580 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.689427ms)
-- stdout --
	{"specversion":"1.0","id":"1328372c-7d9a-4be4-9fd1-b10a8d159892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-773580] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cdcd04e9-b451-4ad5-8174-710ba3d4d12f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22186"}}
	{"specversion":"1.0","id":"a822d952-62e3-4e26-9515-42525ea54297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"85d34dab-9588-4ab6-846a-1758c1a59d3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig"}}
	{"specversion":"1.0","id":"360450ce-09a6-4195-a1be-41dfb5d25980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube"}}
	{"specversion":"1.0","id":"4377cb2b-25df-4cdf-a0a2-efee91003305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9f176894-d4bb-43fe-9713-f41f9f545f5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c638cc31-a760-40fd-91d4-7f0fed0d9a11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-773580" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-773580
--- PASS: TestErrorJSONOutput (0.25s)
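
Note: each stdout line above is a CloudEvents envelope, which is what makes --output=json machine-consumable. A small sketch of scanning that stream for the terminal error event (the type string and data keys are taken verbatim from the output above):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent models only the fields read below.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe `minikube start --output=json ...` into stdin.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}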

TestKicCustomNetwork/create_custom_network (39.42s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-329564 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-329564 --network=: (37.160419177s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-329564" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-329564
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-329564: (2.232429628s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.42s)

TestKicCustomNetwork/use_default_bridge_network (36.64s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-040716 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-040716 --network=bridge: (34.472684916s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-040716" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-040716
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-040716: (2.136710403s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.64s)

TestKicExistingNetwork (36.37s)
=== RUN   TestKicExistingNetwork
I1218 01:02:54.439394 1261148 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1218 01:02:54.455077 1261148 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1218 01:02:54.455156 1261148 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1218 01:02:54.455173 1261148 cli_runner.go:164] Run: docker network inspect existing-network
W1218 01:02:54.471299 1261148 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1218 01:02:54.471336 1261148 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1218 01:02:54.471350 1261148 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1218 01:02:54.471446 1261148 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1218 01:02:54.487222 1261148 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d156d3060c4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:b2:ee:a4:09:a5} reservation:<nil>}
I1218 01:02:54.487513 1261148 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000365e20}
I1218 01:02:54.487534 1261148 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1218 01:02:54.487584 1261148 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1218 01:02:54.544503 1261148 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-528705 --network=existing-network
E1218 01:02:57.378510 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:03:25.214446 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-528705 --network=existing-network: (34.052384593s)
helpers_test.go:176: Cleaning up "existing-network-528705" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-528705
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-528705: (2.181088776s)
I1218 01:03:30.794630 1261148 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.37s)
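
Note: this test first fabricates the network itself (the network_create lines above) so that `--network=existing-network` exercises the adoption path rather than creation. A sketch reproducing that setup step, with the flags copied from the log's `docker network create` invocation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags copied from the network_create step logged above.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24",
			"--gateway=192.168.58.1",
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network").CombinedOutput()
		fmt.Println(string(out), err)
	}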

TestKicCustomSubnet (33.54s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-863319 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-863319 --subnet=192.168.60.0/24: (31.188831661s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-863319 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-863319" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-863319
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-863319: (2.318281656s)
--- PASS: TestKicCustomSubnet (33.54s)
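
Note: the inspect step is what verifies the flag took effect: `docker network inspect custom-subnet-863319 --format "{{(index .IPAM.Config 0).Subnet}}"` prints the network's first IPAM subnet, which should echo back the requested 192.168.60.0/24.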

TestKicStaticIP (35.8s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-320166 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-320166 --static-ip=192.168.200.200: (33.348653723s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-320166 ip
helpers_test.go:176: Cleaning up "static-ip-320166" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-320166
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-320166: (2.284617941s)
--- PASS: TestKicStaticIP (35.80s)
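
Note: same pattern, but for the node address itself: `out/minikube-linux-arm64 -p static-ip-320166 ip` should print the requested 192.168.200.200, confirming that --static-ip pinned the container IP instead of taking one from the default pool.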

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (67.85s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-022467 --driver=docker  --container-runtime=containerd
E1218 01:05:04.399725 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-022467 --driver=docker  --container-runtime=containerd: (29.732644124s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-025208 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-025208 --driver=docker  --container-runtime=containerd: (31.832446934s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-022467
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-025208
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-025208" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-025208
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-025208: (2.409302512s)
helpers_test.go:176: Cleaning up "first-022467" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-022467
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-022467: (2.376331525s)
--- PASS: TestMinikubeProfile (67.85s)

TestMountStart/serial/StartWithMountFirst (8.17s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-871038 --memory=3072 --mount-string /tmp/TestMountStartserial740229767/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-871038 --memory=3072 --mount-string /tmp/TestMountStartserial740229767/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.168398252s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.17s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-871038 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.37s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-872975 --memory=3072 --mount-string /tmp/TestMountStartserial740229767/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-872975 --memory=3072 --mount-string /tmp/TestMountStartserial740229767/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.370960963s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.37s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-872975 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.73s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-871038 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-871038 --alsologtostderr -v=5: (1.72914332s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-872975 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.3s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-872975
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-872975: (1.299021745s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.91s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-872975
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-872975: (6.911219449s)
--- PASS: TestMountStart/serial/RestartStopped (7.91s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-872975 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (78.83s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-924790 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-924790 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.281153509s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.83s)

TestMultiNode/serial/DeployApp2Nodes (5.29s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-924790 -- rollout status deployment/busybox: (3.296226191s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-4r9r5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-pnbwq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-4r9r5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-pnbwq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-4r9r5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-pnbwq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.29s)

TestMultiNode/serial/PingHostFrom2Pods (1s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-4r9r5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-4r9r5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-pnbwq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-924790 -- exec busybox-7b57f96db7-pnbwq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
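
Note: the shell pipeline run inside each pod above extracts the resolved address of host.minikube.internal: awk 'NR==5' keeps the fifth line of nslookup output and cut takes the third space-separated field, which is then pinged. The same parse, sketched in Go (the line and field positions are an assumption about busybox's nslookup output layout):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("nslookup", "host.minikube.internal").Output()
		if err != nil {
			panic(err)
		}
		lines := strings.Split(string(out), "\n")
		if len(lines) >= 5 {
			// NR==5 in awk is index 4 here; Fields collapses runs of spaces,
			// which approximates cut -d' ' -f3 on this output.
			fields := strings.Fields(lines[4])
			if len(fields) >= 3 {
				fmt.Println("host IP:", fields[2]) // e.g. 192.168.67.1
			}
		}
	}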

TestMultiNode/serial/AddNode (28.86s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-924790 -v=5 --alsologtostderr
E1218 01:07:57.379172 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:08:08.295575 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-924790 -v=5 --alsologtostderr: (28.117427442s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.86s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-924790 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.77s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.77s)

TestMultiNode/serial/CopyFile (10.75s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp testdata/cp-test.txt multinode-924790:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2151463296/001/cp-test_multinode-924790.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790:/home/docker/cp-test.txt multinode-924790-m02:/home/docker/cp-test_multinode-924790_multinode-924790-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m02 "sudo cat /home/docker/cp-test_multinode-924790_multinode-924790-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790:/home/docker/cp-test.txt multinode-924790-m03:/home/docker/cp-test_multinode-924790_multinode-924790-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m03 "sudo cat /home/docker/cp-test_multinode-924790_multinode-924790-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp testdata/cp-test.txt multinode-924790-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2151463296/001/cp-test_multinode-924790-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790-m02:/home/docker/cp-test.txt multinode-924790:/home/docker/cp-test_multinode-924790-m02_multinode-924790.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790 "sudo cat /home/docker/cp-test_multinode-924790-m02_multinode-924790.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790-m02:/home/docker/cp-test.txt multinode-924790-m03:/home/docker/cp-test_multinode-924790-m02_multinode-924790-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m03 "sudo cat /home/docker/cp-test_multinode-924790-m02_multinode-924790-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp testdata/cp-test.txt multinode-924790-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2151463296/001/cp-test_multinode-924790-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790-m03:/home/docker/cp-test.txt multinode-924790:/home/docker/cp-test_multinode-924790-m03_multinode-924790.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790 "sudo cat /home/docker/cp-test_multinode-924790-m03_multinode-924790.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 cp multinode-924790-m03:/home/docker/cp-test.txt multinode-924790-m02:/home/docker/cp-test_multinode-924790-m03_multinode-924790-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 ssh -n multinode-924790-m02 "sudo cat /home/docker/cp-test_multinode-924790-m03_multinode-924790-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.75s)
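
Note: each cp/ssh pair above is one round trip of the same check: copy a file into a node with `minikube cp`, cat it back over ssh as root, and compare against the source. A condensed sketch of a single round trip (illustrative only, not the helpers_test.go code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		run := func(args ...string) []byte {
			out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
			if err != nil {
				panic(fmt.Sprintf("%v: %s", err, out))
			}
			return out
		}
		run("-p", "multinode-924790", "cp", "testdata/cp-test.txt", "multinode-924790:/home/docker/cp-test.txt")
		got := run("-p", "multinode-924790", "ssh", "-n", "multinode-924790", "sudo cat /home/docker/cp-test.txt")
		if string(got) != string(want) {
			fmt.Println("cp round-trip mismatch")
		}
	}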

TestMultiNode/serial/StopNode (2.42s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 node stop m03
E1218 01:08:25.215117 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-924790 node stop m03: (1.305341252s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-924790 status: exit status 7 (554.79413ms)
-- stdout --
	multinode-924790
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-924790-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-924790-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-924790 status --alsologtostderr: exit status 7 (558.027942ms)
-- stdout --
	multinode-924790
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-924790-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-924790-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1218 01:08:26.030760 1416808 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:08:26.030894 1416808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:08:26.030906 1416808 out.go:374] Setting ErrFile to fd 2...
	I1218 01:08:26.030911 1416808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:08:26.031252 1416808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:08:26.031497 1416808 out.go:368] Setting JSON to false
	I1218 01:08:26.031559 1416808 mustload.go:66] Loading cluster: multinode-924790
	I1218 01:08:26.031632 1416808 notify.go:221] Checking for updates...
	I1218 01:08:26.032682 1416808 config.go:182] Loaded profile config "multinode-924790": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:08:26.032716 1416808 status.go:174] checking status of multinode-924790 ...
	I1218 01:08:26.033403 1416808 cli_runner.go:164] Run: docker container inspect multinode-924790 --format={{.State.Status}}
	I1218 01:08:26.054577 1416808 status.go:371] multinode-924790 host status = "Running" (err=<nil>)
	I1218 01:08:26.054606 1416808 host.go:66] Checking if "multinode-924790" exists ...
	I1218 01:08:26.054913 1416808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-924790
	I1218 01:08:26.075883 1416808 host.go:66] Checking if "multinode-924790" exists ...
	I1218 01:08:26.076221 1416808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:08:26.076321 1416808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-924790
	I1218 01:08:26.101106 1416808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34027 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/multinode-924790/id_rsa Username:docker}
	I1218 01:08:26.206036 1416808 ssh_runner.go:195] Run: systemctl --version
	I1218 01:08:26.212373 1416808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:08:26.225534 1416808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:08:26.283810 1416808 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-18 01:08:26.273486603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:08:26.284339 1416808 kubeconfig.go:125] found "multinode-924790" server: "https://192.168.67.2:8443"
	I1218 01:08:26.284387 1416808 api_server.go:166] Checking apiserver status ...
	I1218 01:08:26.284439 1416808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 01:08:26.296796 1416808 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	I1218 01:08:26.305219 1416808 api_server.go:182] apiserver freezer: "9:freezer:/docker/1cfe41ae1fc93868c7a8ada5c73f425ee588983f8d424a9e856ee9084132dbfe/kubepods/burstable/podb7cdfe408dd0031ada36fbef79516b1b/49e2523be4a0aa54a36c7e36ae60a0d64c4cd4db4a683b92c023e30d1684b5db"
	I1218 01:08:26.305292 1416808 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1cfe41ae1fc93868c7a8ada5c73f425ee588983f8d424a9e856ee9084132dbfe/kubepods/burstable/podb7cdfe408dd0031ada36fbef79516b1b/49e2523be4a0aa54a36c7e36ae60a0d64c4cd4db4a683b92c023e30d1684b5db/freezer.state
	I1218 01:08:26.314357 1416808 api_server.go:204] freezer state: "THAWED"
	I1218 01:08:26.314387 1416808 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1218 01:08:26.322730 1416808 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1218 01:08:26.322761 1416808 status.go:463] multinode-924790 apiserver status = Running (err=<nil>)
	I1218 01:08:26.322772 1416808 status.go:176] multinode-924790 status: &{Name:multinode-924790 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 01:08:26.322788 1416808 status.go:174] checking status of multinode-924790-m02 ...
	I1218 01:08:26.323104 1416808 cli_runner.go:164] Run: docker container inspect multinode-924790-m02 --format={{.State.Status}}
	I1218 01:08:26.340336 1416808 status.go:371] multinode-924790-m02 host status = "Running" (err=<nil>)
	I1218 01:08:26.340359 1416808 host.go:66] Checking if "multinode-924790-m02" exists ...
	I1218 01:08:26.340690 1416808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-924790-m02
	I1218 01:08:26.356984 1416808 host.go:66] Checking if "multinode-924790-m02" exists ...
	I1218 01:08:26.357302 1416808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 01:08:26.357350 1416808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-924790-m02
	I1218 01:08:26.375799 1416808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34032 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/multinode-924790-m02/id_rsa Username:docker}
	I1218 01:08:26.481785 1416808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 01:08:26.495069 1416808 status.go:176] multinode-924790-m02 status: &{Name:multinode-924790-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1218 01:08:26.495103 1416808 status.go:174] checking status of multinode-924790-m03 ...
	I1218 01:08:26.495467 1416808 cli_runner.go:164] Run: docker container inspect multinode-924790-m03 --format={{.State.Status}}
	I1218 01:08:26.520459 1416808 status.go:371] multinode-924790-m03 host status = "Stopped" (err=<nil>)
	I1218 01:08:26.520482 1416808 status.go:384] host is not running, skipping remaining checks
	I1218 01:08:26.520489 1416808 status.go:176] multinode-924790-m03 status: &{Name:multinode-924790-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
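
For reference, the apiserver probe that status performs above can be replayed by hand: find the kube-apiserver process inside the node, then hit /healthz (the freezer.state read in between only rules out a paused container). A minimal sketch, assuming the profile name and the 192.168.67.2:8443 endpoint from this run:

  # locate the apiserver process inside the node container
  $ minikube -p multinode-924790 ssh -- 'sudo pgrep -xnf "kube-apiserver.*minikube.*"'
  # probe the endpoint the same way the suite does; -k because the cluster CA is self-signed
  $ curl -sk https://192.168.67.2:8443/healthz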

TestMultiNode/serial/StartAfterStop (7.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-924790 node start m03 -v=5 --alsologtostderr: (7.059519668s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.89s)

TestMultiNode/serial/RestartKeepsNodes (77.82s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-924790
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-924790
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-924790: (25.230730132s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-924790 --wait=true -v=5 --alsologtostderr
E1218 01:09:20.442359 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-924790 --wait=true -v=5 --alsologtostderr: (52.463182213s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-924790
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.82s)

TestMultiNode/serial/DeleteNode (5.7s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-924790 node delete m03: (5.026341583s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.70s)
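
The last assertion above flattens every node's conditions with a Go template and keeps only the Ready entries; the same one-liner is handy outside the suite for counting ready nodes:

  $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
  # prints one True/False per node; pipe through grep -c True for a ready count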

TestMultiNode/serial/StopMultiNode (24.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 stop
E1218 01:10:04.396191 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-924790 stop: (23.901265848s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-924790 status: exit status 7 (84.086034ms)
-- stdout --
	multinode-924790
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-924790-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-924790 status --alsologtostderr: exit status 7 (102.558067ms)
-- stdout --
	multinode-924790
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-924790-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1218 01:10:21.968400 1425605 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:10:21.968596 1425605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:10:21.968650 1425605 out.go:374] Setting ErrFile to fd 2...
	I1218 01:10:21.968668 1425605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:10:21.968966 1425605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:10:21.969195 1425605 out.go:368] Setting JSON to false
	I1218 01:10:21.969251 1425605 mustload.go:66] Loading cluster: multinode-924790
	I1218 01:10:21.969328 1425605 notify.go:221] Checking for updates...
	I1218 01:10:21.970432 1425605 config.go:182] Loaded profile config "multinode-924790": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:10:21.970479 1425605 status.go:174] checking status of multinode-924790 ...
	I1218 01:10:21.971076 1425605 cli_runner.go:164] Run: docker container inspect multinode-924790 --format={{.State.Status}}
	I1218 01:10:21.991890 1425605 status.go:371] multinode-924790 host status = "Stopped" (err=<nil>)
	I1218 01:10:21.991910 1425605 status.go:384] host is not running, skipping remaining checks
	I1218 01:10:21.991917 1425605 status.go:176] multinode-924790 status: &{Name:multinode-924790 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 01:10:21.991948 1425605 status.go:174] checking status of multinode-924790-m02 ...
	I1218 01:10:21.992256 1425605 cli_runner.go:164] Run: docker container inspect multinode-924790-m02 --format={{.State.Status}}
	I1218 01:10:22.021461 1425605 status.go:371] multinode-924790-m02 host status = "Stopped" (err=<nil>)
	I1218 01:10:22.021489 1425605 status.go:384] host is not running, skipping remaining checks
	I1218 01:10:22.021496 1425605 status.go:176] multinode-924790-m02 status: &{Name:multinode-924790-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.09s)
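
Note that the non-zero exits here are the expected behaviour, not failures: minikube status reports a fully stopped profile with exit code 7, so scripts can branch on the code instead of parsing the text. A minimal sketch against this profile:

  $ out/minikube-linux-arm64 -p multinode-924790 status; echo "exit=$?"
  # exit=7 while the hosts are stopped, exit=0 once everything is running again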

TestMultiNode/serial/RestartMultiNode (47.55s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-924790 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-924790 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (46.825689136s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-924790 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.55s)

TestMultiNode/serial/ValidateNameConflict (34.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-924790
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-924790-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-924790-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.909022ms)
-- stdout --
	* [multinode-924790-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-924790-m02' is duplicated with machine name 'multinode-924790-m02' in profile 'multinode-924790'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-924790-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-924790-m03 --driver=docker  --container-runtime=containerd: (31.516154908s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-924790
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-924790: exit status 80 (376.720335ms)
-- stdout --
	* Adding node m03 to cluster multinode-924790 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-924790-m03 already exists in multinode-924790-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-924790-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-924790-m03: (2.227649313s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.27s)
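
Both rejections are by design: exit code 14 (MK_USAGE) because a new profile may not reuse a machine name an existing multi-node profile already owns, and exit code 80 (GUEST_NODE_ADD) because node add will not recycle a node name that is now a standalone profile. Listing existing profiles first avoids both collisions; a sketch, assuming jq is available:

  $ out/minikube-linux-arm64 profile list --output=json | jq -r '.valid[].Name'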

TestPreload (118.77s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-982229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-982229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (58.418720245s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-982229 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-982229 image pull gcr.io/k8s-minikube/busybox: (2.246533319s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-982229
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-982229: (5.954710287s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-982229 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1218 01:12:57.378522 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:13:25.214572 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-982229 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (49.455701418s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-982229 image list
helpers_test.go:176: Cleaning up "test-preload-982229" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-982229
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-982229: (2.451603542s)
--- PASS: TestPreload (118.77s)
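
The scenario being validated reduces to: build a cluster without the preload tarball, pull an image by hand, stop, restart with preloading re-enabled, and confirm the hand-pulled image survived the restart. A minimal sketch with a hypothetical profile name:

  $ minikube start -p preload-demo --preload=false --driver=docker --container-runtime=containerd
  $ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
  $ minikube stop -p preload-demo
  $ minikube start -p preload-demo --preload=true
  $ minikube -p preload-demo image list   # busybox should still be listed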

TestScheduledStopUnix (109.01s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-072792 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-072792 --memory=3072 --driver=docker  --container-runtime=containerd: (32.15036308s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-072792 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1218 01:14:19.120115 1441506 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:14:19.120860 1441506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:14:19.120879 1441506 out.go:374] Setting ErrFile to fd 2...
	I1218 01:14:19.120885 1441506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:14:19.121310 1441506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:14:19.121680 1441506 out.go:368] Setting JSON to false
	I1218 01:14:19.121804 1441506 mustload.go:66] Loading cluster: scheduled-stop-072792
	I1218 01:14:19.122454 1441506 config.go:182] Loaded profile config "scheduled-stop-072792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:14:19.122572 1441506 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/config.json ...
	I1218 01:14:19.123243 1441506 mustload.go:66] Loading cluster: scheduled-stop-072792
	I1218 01:14:19.123474 1441506 config.go:182] Loaded profile config "scheduled-stop-072792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-072792 -n scheduled-stop-072792
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-072792 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1218 01:14:19.609914 1441597 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:14:19.610122 1441597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:14:19.610148 1441597 out.go:374] Setting ErrFile to fd 2...
	I1218 01:14:19.610168 1441597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:14:19.610962 1441597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:14:19.611643 1441597 out.go:368] Setting JSON to false
	I1218 01:14:19.611806 1441597 daemonize_unix.go:73] killing process 1441530 as it is an old scheduled stop
	I1218 01:14:19.611889 1441597 mustload.go:66] Loading cluster: scheduled-stop-072792
	I1218 01:14:19.612310 1441597 config.go:182] Loaded profile config "scheduled-stop-072792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:14:19.612385 1441597 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/config.json ...
	I1218 01:14:19.612570 1441597 mustload.go:66] Loading cluster: scheduled-stop-072792
	I1218 01:14:19.612743 1441597 config.go:182] Loaded profile config "scheduled-stop-072792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1218 01:14:19.640189 1261148 retry.go:31] will retry after 50.062µs: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.640315 1261148 retry.go:31] will retry after 163.444µs: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.641428 1261148 retry.go:31] will retry after 150.452µs: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.642547 1261148 retry.go:31] will retry after 298.73µs: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.643626 1261148 retry.go:31] will retry after 582.013µs: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.644734 1261148 retry.go:31] will retry after 834.372µs: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.645844 1261148 retry.go:31] will retry after 1.571829ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.648046 1261148 retry.go:31] will retry after 2.290362ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.651974 1261148 retry.go:31] will retry after 1.932083ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.654145 1261148 retry.go:31] will retry after 2.475087ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.657360 1261148 retry.go:31] will retry after 3.290892ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.661548 1261148 retry.go:31] will retry after 10.669917ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.674435 1261148 retry.go:31] will retry after 13.797781ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.692780 1261148 retry.go:31] will retry after 23.36142ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.716285 1261148 retry.go:31] will retry after 19.533221ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
I1218 01:14:19.735996 1261148 retry.go:31] will retry after 25.985812ms: open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-072792 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-072792 -n scheduled-stop-072792
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-072792
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-072792 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1218 01:14:45.754051 1442290 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:14:45.754285 1442290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:14:45.754291 1442290 out.go:374] Setting ErrFile to fd 2...
	I1218 01:14:45.754296 1442290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:14:45.754524 1442290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:14:45.754771 1442290 out.go:368] Setting JSON to false
	I1218 01:14:45.754859 1442290 mustload.go:66] Loading cluster: scheduled-stop-072792
	I1218 01:14:45.755233 1442290 config.go:182] Loaded profile config "scheduled-stop-072792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1218 01:14:45.755318 1442290 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/scheduled-stop-072792/config.json ...
	I1218 01:14:45.755506 1442290 mustload.go:66] Loading cluster: scheduled-stop-072792
	I1218 01:14:45.755629 1442290 config.go:182] Loaded profile config "scheduled-stop-072792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
E1218 01:14:47.468618 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1218 01:15:04.397707 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-072792
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-072792: exit status 7 (76.23988ms)
-- stdout --
	scheduled-stop-072792
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-072792 -n scheduled-stop-072792
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-072792 -n scheduled-stop-072792: exit status 7 (71.091857ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-072792" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-072792
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-072792: (5.002800889s)
--- PASS: TestScheduledStopUnix (109.01s)
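
The schedule/cancel cycle exercised above maps onto three user-facing commands; a minimal sketch, assuming a profile named demo:

  $ minikube stop -p demo --schedule 5m        # arm a background stop
  $ minikube status -p demo --format '{{.TimeToStop}}'
  $ minikube stop -p demo --cancel-scheduled   # disarm it again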

TestInsufficientStorage (12.73s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-447321 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-447321 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.110691783s)
-- stdout --
	{"specversion":"1.0","id":"204a6855-feb3-4581-a2f7-f91508c6fc4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-447321] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7245b24c-ec08-42fc-9a36-e1c01fbeb1a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22186"}}
	{"specversion":"1.0","id":"678b0a70-d041-4c5d-91c4-ffe32f21b6cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14dc3fe8-4465-4c12-9831-c3f7fbf383ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig"}}
	{"specversion":"1.0","id":"cb5de6b5-6cf9-4ea3-8372-5fd4b0080fbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube"}}
	{"specversion":"1.0","id":"362d4ea5-79fe-4412-ba1b-372c84e458bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"64838cb7-0460-4b35-8c70-9fb24075b7ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"93aaf94f-c670-4354-ade7-6d0454c90bf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0c6985c6-e291-45cb-92a3-a6f34db6ce4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c998c661-4ca2-4651-a982-10dfb9e47166","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"65dd11f7-3e15-4255-a555-221ddb87eb2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"bb0114ea-4b55-499c-92d8-df6109fcf86a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-447321\" primary control-plane node in \"insufficient-storage-447321\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe304146-0bf3-4dae-a1b3-b0f7ea9870e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765966054-22186 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"79ab837d-f76f-4472-b219-5650cd41a557","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"948040ff-8894-492a-80ed-cd7978474cf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-447321 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-447321 --output=json --layout=cluster: exit status 7 (310.327848ms)
-- stdout --
	{"Name":"insufficient-storage-447321","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-447321","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1218 01:15:46.323230 1444115 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-447321" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-447321 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-447321 --output=json --layout=cluster: exit status 7 (315.928086ms)
-- stdout --
	{"Name":"insufficient-storage-447321","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-447321","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1218 01:15:46.636030 1444182 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-447321" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
	E1218 01:15:46.647426 1444182 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/insufficient-storage-447321/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-447321" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-447321
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-447321: (1.991635713s)
--- PASS: TestInsufficientStorage (12.73s)
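
The out-of-disk condition here is simulated rather than real: the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON events make minikube believe /var is at capacity, and --output=json turns progress into CloudEvents that are easy to assert on. A sketch with a hypothetical profile name:

  $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --output=json --driver=docker
  # expect exit code 26 and an io.k8s.sigs.minikube.error event naming RSRC_DOCKER_STORAGE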

TestRunningBinaryUpgrade (64.04s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2464084877 start -p running-upgrade-722246 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1218 01:23:25.214443 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2464084877 start -p running-upgrade-722246 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.709909356s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-722246 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-722246 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.38765729s)
helpers_test.go:176: Cleaning up "running-upgrade-722246" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-722246
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-722246: (2.424016491s)
--- PASS: TestRunningBinaryUpgrade (64.04s)
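
The upgrade path under test is deliberately simple: create a cluster with the previous release, then restart the same profile with the binary under test and let it migrate the state under MINIKUBE_HOME in place. A sketch (the versioned path stands in for wherever the old release was downloaded):

  $ /tmp/minikube-v1.35.0 start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
  $ out/minikube-linux-arm64 start -p upgrade-demo --memory=3072 --alsologtostderr -v=1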

TestMissingContainerUpgrade (144.48s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1673832415 start -p missing-upgrade-972102 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1673832415 start -p missing-upgrade-972102 --memory=3072 --driver=docker  --container-runtime=containerd: (1m3.355443737s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-972102
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-972102
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-972102 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-972102 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m16.835076642s)
helpers_test.go:176: Cleaning up "missing-upgrade-972102" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-972102
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-972102: (2.143615468s)
--- PASS: TestMissingContainerUpgrade (144.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-177021 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-177021 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (99.704538ms)
-- stdout --
	* [NoKubernetes-177021] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
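
As the usage error says, --no-kubernetes contradicts an explicit --kubernetes-version; if a version is pinned in the global config it must be unset before a Kubernetes-free node can start:

  $ minikube config unset kubernetes-version
  $ minikube start -p NoKubernetes-177021 --no-kubernetes --driver=docker --container-runtime=containerd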

TestNoKubernetes/serial/StartWithK8s (50.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-177021 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-177021 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (50.143525439s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-177021 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.84s)

TestNoKubernetes/serial/StartWithStopK8s (9.88s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-177021 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-177021 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.018654968s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-177021 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-177021 status -o json: exit status 2 (515.23232ms)
-- stdout --
	{"Name":"NoKubernetes-177021","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-177021
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-177021: (2.345351057s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.88s)

TestNoKubernetes/serial/Start (8.19s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-177021 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-177021 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.189496861s)
--- PASS: TestNoKubernetes/serial/Start (8.19s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-177021 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-177021 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.061475ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
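
The "Process exited with status 3" in stderr is the assertion succeeding, not an error: systemctl is-active exits 0 only for an active unit (3 here means inactive), so the non-zero exit proves the kubelet was never started. Reproduced by hand:

  $ minikube ssh -p NoKubernetes-177021 "sudo systemctl is-active kubelet"; echo "exit=$?"
  # prints "inactive" and exit=3 on a --no-kubernetes node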

TestNoKubernetes/serial/ProfileList (0.71s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.71s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-177021
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-177021: (1.287050204s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (7.38s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-177021 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-177021 --driver=docker  --container-runtime=containerd: (7.379203726s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.38s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-177021 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-177021 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.787917ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (1.08s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.08s)

TestStoppedBinaryUpgrade/Upgrade (302s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4113539917 start -p stopped-upgrade-394648 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1218 01:18:25.215106 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4113539917 start -p stopped-upgrade-394648 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.545903768s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4113539917 -p stopped-upgrade-394648 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4113539917 -p stopped-upgrade-394648 stop: (1.234073697s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-394648 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1218 01:20:04.395337 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:22:57.378513 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-394648 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m28.220068208s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (302.00s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.24s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-394648
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-394648: (2.237529811s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.24s)

TestPause/serial/Start (51.03s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-778354 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1218 01:24:48.297837 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:25:04.397469 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-778354 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (51.025674122s)
--- PASS: TestPause/serial/Start (51.03s)

TestPause/serial/SecondStartNoReconfiguration (6.46s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-778354 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-778354 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.447395237s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.46s)
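The assertion behind this test is that a second start against an already-running cluster is a cheap reconcile (6.45s above) rather than a full reprovision. A rough timing sketch; the 60-second bound is an assumed illustrative threshold, not the harness's actual check:

// second_start_sketch.go — a minimal timing sketch for a no-op restart.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "pause-778354", "--alsologtostderr", "-v=1",
		"--driver=docker", "--container-runtime=containerd").CombinedOutput()
	if err != nil {
		log.Fatalf("second start failed: %v\n%s", err, out)
	}
	if d := time.Since(start); d > 60*time.Second {
		log.Fatalf("second start took %v; suspiciously long for a no-op restart", d)
	}
}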

                                                
                                    
TestPause/serial/Pause (0.73s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-778354 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.35s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-778354 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-778354 --output=json --layout=cluster: exit status 2 (349.875394ms)

                                                
                                                
-- stdout --
	{"Name":"pause-778354","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-778354","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
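The single-line JSON above is the machine-readable cluster layout: StatusCode 200 means OK, 405 Stopped, 418 Paused. A minimal decoding sketch; the struct fields mirror the payload above, while everything else (file name, error handling) is assumed:

// status_sketch.go — decodes the --output=json --layout=cluster payload shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type Component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"` // e.g. 200 OK, 405 Stopped, 418 Paused
	StatusName string `json:"StatusName"`
}

type Node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]Component `json:"Components"`
}

type ClusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]Component `json:"Components"`
	Nodes         []Node               `json:"Nodes"`
}

func main() {
	// `minikube status` exits non-zero for a paused cluster (exit status 2 above),
	// so tolerate the error and decode whatever was printed on stdout.
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"-p", "pause-778354", "--output=json", "--layout=cluster").Output()
	var st ClusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (kubelet: %s)\n", st.Name, st.StatusName,
		st.Nodes[0].Components["kubelet"].StatusName)
}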

                                                
                                    
TestPause/serial/Unpause (0.63s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-778354 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-778354 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
TestPause/serial/DeletePaused (2.57s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-778354 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-778354 --alsologtostderr -v=5: (2.570204856s)
--- PASS: TestPause/serial/DeletePaused (2.57s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.42s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-778354
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-778354: exit status 1 (20.208466ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-778354: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)
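The deletion check above treats a failing `docker volume inspect` as success: the profile's volume must be gone. A minimal sketch of the same check, assuming only the docker CLI:

// verify_deleted_sketch.go — after `minikube delete`, the profile's
// docker volume must no longer exist.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "pause-778354"
	out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
	if err == nil {
		log.Fatalf("volume %q still exists after delete:\n%s", profile, out)
	}
	// Docker reports a missing volume on stderr and exits 1, as in the log above.
	if !strings.Contains(string(out), "no such volume") {
		log.Fatalf("unexpected failure inspecting %q: %v\n%s", profile, err, out)
	}
	fmt.Println("volume cleaned up as expected")
}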

                                                
                                    
TestNetworkPlugins/group/false (3.63s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-459533 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-459533 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (192.588418ms)

                                                
                                                
-- stdout --
	* [false-459533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 01:26:06.873431 1495532 out.go:360] Setting OutFile to fd 1 ...
	I1218 01:26:06.873568 1495532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:26:06.873581 1495532 out.go:374] Setting ErrFile to fd 2...
	I1218 01:26:06.873587 1495532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1218 01:26:06.873967 1495532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
	I1218 01:26:06.874478 1495532 out.go:368] Setting JSON to false
	I1218 01:26:06.875428 1495532 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":29313,"bootTime":1765991854,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 01:26:06.875520 1495532 start.go:143] virtualization:  
	I1218 01:26:06.879234 1495532 out.go:179] * [false-459533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1218 01:26:06.883118 1495532 out.go:179]   - MINIKUBE_LOCATION=22186
	I1218 01:26:06.883181 1495532 notify.go:221] Checking for updates...
	I1218 01:26:06.888885 1495532 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 01:26:06.891804 1495532 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
	I1218 01:26:06.894825 1495532 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
	I1218 01:26:06.897736 1495532 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 01:26:06.900676 1495532 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 01:26:06.904217 1495532 config.go:182] Loaded profile config "kubernetes-upgrade-675544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1218 01:26:06.904364 1495532 driver.go:422] Setting default libvirt URI to qemu:///system
	I1218 01:26:06.927370 1495532 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1218 01:26:06.927536 1495532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 01:26:06.999151 1495532 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-18 01:26:06.989480888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1218 01:26:06.999276 1495532 docker.go:319] overlay module found
	I1218 01:26:07.002961 1495532 out.go:179] * Using the docker driver based on user configuration
	I1218 01:26:07.005839 1495532 start.go:309] selected driver: docker
	I1218 01:26:07.005868 1495532 start.go:927] validating driver "docker" against <nil>
	I1218 01:26:07.005884 1495532 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 01:26:07.009502 1495532 out.go:203] 
	W1218 01:26:07.012424 1495532 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1218 01:26:07.015317 1495532 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-459533 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-459533

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-459533

>>> host: /etc/nsswitch.conf:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /etc/hosts:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /etc/resolv.conf:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-459533

>>> host: crictl pods:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: crictl containers:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> k8s: describe netcat deployment:
error: context "false-459533" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-459533" does not exist

>>> k8s: netcat logs:
error: context "false-459533" does not exist

>>> k8s: describe coredns deployment:
error: context "false-459533" does not exist

>>> k8s: describe coredns pods:
error: context "false-459533" does not exist

>>> k8s: coredns logs:
error: context "false-459533" does not exist

>>> k8s: describe api server pod(s):
error: context "false-459533" does not exist

>>> k8s: api server logs:
error: context "false-459533" does not exist

>>> host: /etc/cni:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: ip a s:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: ip r s:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: iptables-save:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: iptables table nat:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> k8s: describe kube-proxy daemon set:
error: context "false-459533" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-459533" does not exist

>>> k8s: kube-proxy logs:
error: context "false-459533" does not exist

>>> host: kubelet daemon status:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: kubelet daemon config:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> k8s: kubelet logs:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 18 Dec 2025 01:18:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-675544
contexts:
- context:
    cluster: kubernetes-upgrade-675544
    user: kubernetes-upgrade-675544
  name: kubernetes-upgrade-675544
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-675544
  user:
    client-certificate: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.crt
    client-key: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-459533

>>> host: docker daemon status:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: docker daemon config:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /etc/docker/daemon.json:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: docker system info:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: cri-docker daemon status:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: cri-docker daemon config:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: cri-dockerd version:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: containerd daemon status:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: containerd daemon config:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /etc/containerd/config.toml:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: containerd config dump:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: crio daemon status:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: crio daemon config:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: /etc/crio:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"

>>> host: crio config:
* Profile "false-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-459533"
----------------------- debugLogs end: false-459533 [took: 3.27038076s] --------------------------------
helpers_test.go:176: Cleaning up "false-459533" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-459533
--- PASS: TestNetworkPlugins/group/false (3.63s)
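This test passes precisely because the start is rejected: --cni=false is incompatible with the containerd runtime, so minikube exits with usage error code 14 before creating anything. A minimal sketch asserting that behavior, using the same binary and flags as the log; the assertion structure itself is assumed:

// cni_required_sketch.go — asserts that --cni=false with containerd fails
// usage validation (exit status 14), as shown in the log above.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-459533",
		"--memory=3072", "--cni=false", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 14 {
		log.Fatalf("expected exit status 14, got %v\n%s", err, out)
	}
	if !strings.Contains(string(out), `The "containerd" container runtime requires CNI`) {
		log.Fatalf("missing MK_USAGE message:\n%s", out)
	}
	fmt.Println("usage validation rejected --cni=false as expected")
}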

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (71.61s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-207212 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-207212 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m11.609720655s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (71.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-207212 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e4db18bd-1ef8-4cac-8ec2-c6c7755fb835] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e4db18bd-1ef8-4cac-8ec2-c6c7755fb835] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003073888s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-207212 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)
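The DeployApp step is a create-then-wait pattern. A minimal sketch of the same steps using `kubectl wait` in place of the harness's own polling helper; the manifest path and context name come from the log, the rest is assumed:

// deploy_wait_sketch.go — create the busybox pod, wait for Ready, then exec.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	const ctx = "old-k8s-version-207212"
	// Create the busybox pod from the same manifest the test uses.
	run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml")
	// Block until the integration-test=busybox pod reports Ready (the test allows 8m).
	run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m")
	// Same smoke check as the log: the container must answer an exec.
	run("kubectl", "--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
}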

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-207212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-207212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.122554045s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-207212 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-207212 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-207212 --alsologtostderr -v=3: (12.111833281s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-207212 -n old-k8s-version-207212
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-207212 -n old-k8s-version-207212: exit status 7 (71.673155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-207212 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
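The point of this step is that `status` exits 7 while the host is stopped, so the caller must tolerate the non-zero exit and read stdout anyway before enabling the addon. A minimal sketch of that pattern, with assumed error handling:

// status_after_stop_sketch.go — tolerate the exit-7 status on a stopped
// cluster, confirm the host reads "Stopped", then enable an addon.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "old-k8s-version-207212"
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if err != nil {
		fmt.Printf("status error: %v (may be ok)\n", err) // mirrors the test's note
	}
	if strings.TrimSpace(string(out)) != "Stopped" {
		log.Fatalf("expected Stopped host, got %q", out)
	}
	// Addons can still be enabled while the cluster is down.
	if out, err := exec.Command("out/minikube-linux-arm64", "addons", "enable",
		"dashboard", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("enable dashboard failed: %v\n%s", err, out)
	}
}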

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (26.41s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-207212 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-207212 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (25.963977599s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-207212 -n old-k8s-version-207212
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (26.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-lg8lz" [0c4c7570-eab8-42a3-af83-6968b4c8a01a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-lg8lz" [0c4c7570-eab8-42a3-af83-6968b4c8a01a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004305905s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-lg8lz" [0c4c7570-eab8-42a3-af83-6968b4c8a01a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004269098s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-207212 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-207212 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
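The image audit parses `image list --format=json` and reports tags that minikube itself would not have pulled. A minimal sketch; the repoTags field name is an assumption about minikube's JSON output (it is not shown in this log), and the allowlist below is illustrative rather than the harness's actual list:

// image_list_sketch.go — flag images outside minikube's own registries.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

type image struct {
	RepoTags []string `json:"repoTags"` // assumed field name
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p",
		"old-k8s-version-207212", "image", "list", "--format=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Anything outside the registries minikube itself pulls from is
			// reported, matching the "Found non-minikube image" lines above.
			if !strings.Contains(tag, "registry.k8s.io/") &&
				!strings.Contains(tag, "gcr.io/k8s-minikube/storage-provisioner") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}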

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-207212 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-207212 -n old-k8s-version-207212
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-207212 -n old-k8s-version-207212: exit status 2 (371.451263ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-207212 -n old-k8s-version-207212
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-207212 -n old-k8s-version-207212: exit status 2 (333.765633ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-207212 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-207212 -n old-k8s-version-207212
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-207212 -n old-k8s-version-207212
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.25s)
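The pause verification above is a fixed sequence: pause, read {{.APIServer}} and {{.Kubelet}} through status templates (both status calls exit 2 while paused, which the test tolerates), then unpause. A minimal sketch of that cycle, with assumed error handling:

// pause_cycle_sketch.go — pause, verify component states via status
// templates, then unpause, mirroring the sequence in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

const (
	bin     = "out/minikube-linux-arm64"
	profile = "old-k8s-version-207212"
)

func status(tmpl string) string {
	// Paused components make `status` exit 2, so ignore the error and keep stdout.
	out, _ := exec.Command(bin, "status", "--format="+tmpl, "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	if out, err := exec.Command(bin, "pause", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("pause failed: %v\n%s", err, out)
	}
	if s := status("{{.APIServer}}"); s != "Paused" {
		log.Fatalf("apiserver = %q, want Paused", s)
	}
	if s := status("{{.Kubelet}}"); s != "Stopped" {
		log.Fatalf("kubelet = %q, want Stopped", s)
	}
	if out, err := exec.Command(bin, "unpause", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("unpause failed: %v\n%s", err, out)
	}
	fmt.Println("pause/unpause cycle verified")
}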

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (52.9s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1218 01:32:57.379550 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:33:25.214775 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (52.90068867s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.32s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-922343 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [db10127f-1ad3-4908-b94c-a35dfd581057] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [db10127f-1ad3-4908-b94c-a35dfd581057] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003427611s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-922343 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-922343 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-922343 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-922343 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-922343 --alsologtostderr -v=3: (12.062362204s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-922343 -n embed-certs-922343
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-922343 -n embed-certs-922343: exit status 7 (71.735662ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-922343 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.49s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-922343 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (51.085075214s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-922343 -n embed-certs-922343
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ljh4p" [6c70eb81-6e2c-4651-862d-7e3033403c8e] Running
E1218 01:35:04.395094 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003555614s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ljh4p" [6c70eb81-6e2c-4651-862d-7e3033403c8e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003383594s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-922343 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-922343 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-922343 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-922343 -n embed-certs-922343
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-922343 -n embed-certs-922343: exit status 2 (355.805705ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-922343 -n embed-certs-922343
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-922343 -n embed-certs-922343: exit status 2 (347.862704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-922343 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-922343 -n embed-certs-922343
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-922343 -n embed-certs-922343
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.51s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (51.511377807s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-207500 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [73ed5c67-2f5e-4b2f-8860-9201a3bf4a6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [73ed5c67-2f5e-4b2f-8860-9201a3bf4a6f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00376671s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-207500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)
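The DeployApp step amounts to creating the busybox fixture, waiting for it to report Ready, and sanity-checking the container's file-descriptor limit. Roughly the same flow by hand (the kubectl wait line is an illustrative stand-in for the harness's 8m poll, not what the test literally runs):

    kubectl --context default-k8s-diff-port-207500 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-207500 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
    kubectl --context default-k8s-diff-port-207500 exec busybox -- /bin/sh -c "ulimit -n"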

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-207500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017473705s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-207500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)
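Note that the enable command overrides both the metrics-server image (registry.k8s.io/echoserver:1.4) and its registry (fake.domain), so the resulting pod is not expected to pull a working image; the follow-up describe only verifies that the overrides landed in the deployment spec. A rough manual check (the grep filter is illustrative):

    kubectl --context default-k8s-diff-port-207500 describe deploy/metrics-server -n kube-system | grep -i image
    # expected to show fake.domain/registry.k8s.io/echoserver:1.4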

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-207500 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-207500 --alsologtostderr -v=3: (12.102418833s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500: exit status 7 (81.293937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
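Same pattern as Pause above: on a stopped profile, status --format={{.Host}} prints Stopped and exits 7, yet addons enable still succeeds because the flag is persisted in the profile config and takes effect on the next start. By hand:

    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-207500; echo "status exit: $?"    # Stopped / 7
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-207500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4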

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1218 01:36:43.269796 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:43.276156 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:43.287582 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:43.308789 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:43.350124 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:43.431860 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:43.593320 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:43.914927 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:44.557027 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:45.839088 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:48.401887 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:36:53.524325 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 01:37:03.766508 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-207500 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (49.245673688s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ln2gp" [469876e5-0ff4-4d28-a14f-c31a3010dedd] Running
E1218 01:37:24.247865 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/old-k8s-version-207212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003278756s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ln2gp" [469876e5-0ff4-4d28-a14f-c31a3010dedd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007077639s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-207500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-207500 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
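VerifyKubernetesImages compares the runtime's image list against the expected set for this Kubernetes version; the "Found non-minikube image" lines are informational, not failures. To eyeball the same data, something like the following works (the jq filter and the repoTags field name are assumptions about the JSON shape, not taken from this log):

    out/minikube-linux-arm64 -p default-k8s-diff-port-207500 image list --format=json | jq -r '.[].repoTags[]'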

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-207500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500: exit status 2 (333.412955ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500: exit status 2 (341.907895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-207500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-207500 -n default-k8s-diff-port-207500
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-970975 --alsologtostderr -v=3
E1218 01:41:16.816966 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/default-k8s-diff-port-207500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-970975 --alsologtostderr -v=3: (1.292128247s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-970975 -n no-preload-970975: exit status 7 (69.128423ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-970975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-120615 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-120615 --alsologtostderr -v=3: (1.314361402s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-120615 -n newest-cni-120615: exit status 7 (65.795816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-120615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-120615 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.153022562s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.15s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-459533 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-459533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dv7n7" [fc564a19-6618-48a3-9400-2abd13163561] Pending
helpers_test.go:353: "netcat-cd4db9dbf-dv7n7" [fc564a19-6618-48a3-9400-2abd13163561] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00400663s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-459533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
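The DNS/Localhost/HairPin trio exercises three paths from inside the netcat pod: cluster DNS, loopback, and hairpin traffic (the pod reaching itself through its own Service name). The hairpin probe is the one that depends on CNI behavior; by hand it is just:

    kubectl --context auto-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
    # -z: scan without sending data, -w 5: connect timeout, -i 5: interval; exit 0 means the hairpin connection succeeded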

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (52.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (52.697175974s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.70s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-vrd4h" [7997e597-9027-4e35-baff-113674f3c8b0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003473568s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-459533 "pgrep -a kubelet"
I1218 01:56:16.916339 1261148 config.go:182] Loaded profile config "kindnet-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-459533 replace --force -f testdata/netcat-deployment.yaml
I1218 01:56:17.220765 1261148 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cn2bq" [278941e9-2ede-4c1b-bc2c-f0bab8efc6a8] Pending
helpers_test.go:353: "netcat-cd4db9dbf-cn2bq" [278941e9-2ede-4c1b-bc2c-f0bab8efc6a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cn2bq" [278941e9-2ede-4c1b-bc2c-f0bab8efc6a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004811854s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-459533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (63.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m3.208076332s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.21s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-wpfs6" [01eb822f-f3ce-4ce2-88d8-ab60c9895dba] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-wpfs6" [01eb822f-f3ce-4ce2-88d8-ab60c9895dba] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003970296s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-459533 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-459533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-l8hsh" [1bd5c64b-a301-4279-b783-933935c4cfca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-l8hsh" [1bd5c64b-a301-4279-b783-933935c4cfca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.008111527s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.46s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-459533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m0.198193105s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.20s)
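Unlike the kindnet/calico/flannel/bridge runs elsewhere in this suite, --cni here is given a file path instead of a built-in name; minikube applies the referenced manifest as a custom CNI once the node is up. The invocation, trimmed to the relevant flag:

    out/minikube-linux-arm64 start -p custom-flannel-459533 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd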

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-459533 "pgrep -a kubelet"
I1218 01:59:34.633550 1261148 config.go:182] Loaded profile config "custom-flannel-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-459533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-v2mzn" [6bacec78-1f42-43ca-8cc9-2b848cc2e4a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-v2mzn" [6bacec78-1f42-43ca-8cc9-2b848cc2e4a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004208758s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-459533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (80.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m20.790838418s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.79s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1218 02:01:20.875808 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.883007783s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.88s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-459533 "pgrep -a kubelet"
I1218 02:01:30.099206 1261148 config.go:182] Loaded profile config "enable-default-cni-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-459533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rp9xm" [f173e1f6-7231-4c06-b79d-725671e6e9f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 02:01:31.118697 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-rp9xm" [f173e1f6-7231-4c06-b79d-725671e6e9f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004146235s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-459533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-459533 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m25.543219059s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.54s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-b2h6p" [a95d2310-cd80-4d7d-a10d-b51e7e47738e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003968054s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-459533 "pgrep -a kubelet"
I1218 02:02:26.848032 1261148 config.go:182] Loaded profile config "flannel-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-459533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rm6fm" [e3c9feed-da5a-4b56-af21-90850dd72986] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 02:02:27.405635 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:27.411987 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:27.423456 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:27.444901 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:27.486284 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:27.567630 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:27.729079 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:28.050711 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:28.692284 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:29.580745 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/auto-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:29.974452 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-rm6fm" [e3c9feed-da5a-4b56-af21-90850dd72986] Running
E1218 02:02:32.536518 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 02:02:32.562899 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kindnet-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00285861s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-459533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1218 02:02:37.658580 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/no-preload-970975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-459533 "pgrep -a kubelet"
I1218 02:03:30.983729 1261148 config.go:182] Loaded profile config "bridge-459533": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-459533 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-swj9v" [c086c4f5-8f41-4c6d-bb79-902fedc84395] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-swj9v" [c086c4f5-8f41-4c6d-bb79-902fedc84395] Running
E1218 02:03:34.382654 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/calico-459533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003713479s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-459533 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-459533 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.17
392 TestNetworkPlugins/group/kubenet 3.52
400 TestNetworkPlugins/group/cilium 3.89
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

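All of the cached-images and binaries subtests above short-circuit because a preload tarball already exists. A minimal sketch of such a gate, assuming a hypothetical cache path (minikube's real preload layout and naming live in its download package and may differ):

```go
package example

import (
	"os"
	"path/filepath"
	"testing"
)

// preloadTarball is an assumed cache location used only for this sketch.
func preloadTarball(k8sVersion string) string {
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-"+k8sVersion+".tar.lz4")
}

func TestCachedImagesExample(t *testing.T) {
	if _, err := os.Stat(preloadTarball("v1.35.0-rc.1")); err == nil {
		t.Skip("Preload exists, images won't be cached")
	}
	// ...otherwise verify that each image landed in the cache...
}
```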
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-000917 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-000917" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-000917
--- SKIP: TestDownloadOnlyKic (0.45s)

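The helpers_test.go lines above show the standard cleanup step: the temporary profile is removed by invoking the minikube binary itself. A self-contained sketch of that shell-out (the binary path and flow mirror the log; the helper itself is illustrative, not minikube's helper):

```go
package example

import (
	"os/exec"
	"testing"
)

// cleanupProfile deletes a leftover minikube profile, logging rather than
// failing the test if the delete command errors, as cleanup helpers do.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("cleanup of %q failed: %v\n%s", profile, err, out)
	}
}
```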
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

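The === PAUSE / === CONT pairs in blocks like TestOffline come straight from Go's test runner: a test that calls t.Parallel() is paused, and continued only once the serial tests in its group have finished. A minimal reproduction:

```go
package example

import "testing"

func TestOfflineExample(t *testing.T) {
	// go test prints "=== PAUSE TestOfflineExample" here, and
	// "=== CONT  TestOfflineExample" when the test is resumed.
	t.Parallel()
	// ...the skip check or the real assertions run after the continuation...
}
```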
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

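docker_test.go:41 skips because this job exercises containerd, not docker. The gate presumably compares the runtime under test against the one the test needs; a sketch with an assumed flag name (not minikube's actual flag wiring):

```go
package example

import (
	"flag"
	"testing"
)

// containerRuntime is an assumed test flag standing in for however the
// suite is told which container runtime it is exercising.
var containerRuntime = flag.String("container-runtime", "docker", "container runtime under test")

func TestDockerFlagsExample(t *testing.T) {
	if *containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", *containerRuntime)
	}
	// ...docker-specific flag assertions...
}
```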
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

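none_test.go:38 requires both the none driver and a non-empty SUDO_USER, since the test exercises handing file ownership back to the invoking user. The environment half of that gate is straightforward (sketch only, not minikube's source):

```go
package example

import (
	"os"
	"testing"
)

func TestChangeNoneUserExample(t *testing.T) {
	if os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
	// ...verify files are chown'd back to $SUDO_USER here...
}
```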
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-618736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-618736
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.52s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-459533 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-459533

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-459533

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /etc/hosts:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /etc/resolv.conf:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-459533

>>> host: crictl pods:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: crictl containers:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> k8s: describe netcat deployment:
error: context "kubenet-459533" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-459533" does not exist

>>> k8s: netcat logs:
error: context "kubenet-459533" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-459533" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-459533" does not exist

>>> k8s: coredns logs:
error: context "kubenet-459533" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-459533" does not exist

>>> k8s: api server logs:
error: context "kubenet-459533" does not exist

>>> host: /etc/cni:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: ip a s:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: ip r s:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: iptables-save:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: iptables table nat:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-459533" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-459533" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-459533" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: kubelet daemon config:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> k8s: kubelet logs:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 18 Dec 2025 01:18:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-675544
contexts:
- context:
    cluster: kubernetes-upgrade-675544
    user: kubernetes-upgrade-675544
  name: kubernetes-upgrade-675544
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-675544
  user:
    client-certificate: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.crt
    client-key: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-459533

>>> host: docker daemon status:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: docker daemon config:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: docker system info:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: cri-docker daemon status:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: cri-docker daemon config:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: cri-dockerd version:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: containerd daemon status:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: containerd daemon config:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: containerd config dump:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: crio daemon status:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: crio daemon config:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: /etc/crio:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

>>> host: crio config:
* Profile "kubenet-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-459533"

----------------------- debugLogs end: kubenet-459533 [took: 3.341313072s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-459533" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-459533
--- SKIP: TestNetworkPlugins/group/kubenet (3.52s)

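Every kubectl probe in the debugLogs above fails the same way: the kubeconfig holds only a kubernetes-upgrade-675544 context, while the probes ask for kubenet-459533, a profile that was never started. The same lookup can be reproduced with client-go's clientcmd package (real API); the kubeconfig path below is the conventional default, which the report does not state:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed default kubeconfig location.
	path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	const want = "kubenet-459533"
	if _, ok := cfg.Contexts[want]; !ok {
		// Mirrors kubectl's "context was not found for specified context".
		fmt.Printf("context %q not found; available contexts: %d\n", want, len(cfg.Contexts))
	}
}
```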
TestNetworkPlugins/group/cilium (3.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-459533 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-459533" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt
extensions:
- extension:
last-update: Thu, 18 Dec 2025 01:18:06 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.85.2:8443
name: kubernetes-upgrade-675544
contexts:
- context:
cluster: kubernetes-upgrade-675544
user: kubernetes-upgrade-675544
name: kubernetes-upgrade-675544
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-675544
user:
client-certificate: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.crt
client-key: /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/kubernetes-upgrade-675544/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-459533

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: cri-dockerd version:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: containerd daemon status:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: containerd daemon config:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: containerd config dump:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: crio daemon status:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: crio daemon config:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: /etc/crio:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"

>>> host: crio config:
* Profile "cilium-459533" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-459533"
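Every ">>> host:" probe above short-circuits with the same two lines: minikube finds no profile named cilium-459533, so there is no machine to run the probe against. The recovery steps are the ones the message itself suggests, sketched here for reference (the test instead deletes the leftover profile below):

    # Confirm that no cilium-459533 profile exists on this host.
    minikube profile list
    # Creating the profile would give the host probes a machine to inspect.
    minikube start -p cilium-459533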

----------------------- debugLogs end: cilium-459533 [took: 3.721904384s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-459533" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-459533
--- SKIP: TestNetworkPlugins/group/cilium (3.89s)